A Formal Framework for Distributed Cyber-Physical Systems

Composition is an important feature of a specification language, as it enables the design of a complex system in terms of a product of its parts. Decomposition is equally important in order to reason about structural properties of a system. Usually, however, a system can be decomposed in more than one way, each optimizing for a different set of criteria. We extend an algebraic component-based model for cyber-physical systems to reason about decomposition. In this model, components compose using a family of algebraic products, and decompose, under some conditions, using a corresponding family of division operators. We use division to specify invariants of a system of components, and to model desirable updates. We apply our framework to design a cyber-physical system consisting of robots moving on a shared field, and identify desirable updates using our division operator.


Introduction
We present a design framework for cyber-physical systems that enables reasoning about composition and decomposition. It is common, when modeling a system, to separate its design from its implementation. A design framework employs high-level primitives to simplify the specification of what behavior is desirable. A formal design, moreover, enables high-level analysis that benefits later implementation (e.g., proof of the existence of an implementation, verification of safety properties, etc.). In contrast, an implementation uses low-level operations to specify how a behavior is constructed. An implementation provides a precise description of the system's behavior that can be tested, simulated, and under some conditions, proved correct with respect to the design specification.
Composition is the act of assembling components to form complex systems. A formal model's ability to support such construction is a particularly desirable property if the underlying parts have different types of specifications (e.g., continuous versus discrete), but still need to communicate and interact. Our work in [12] presents a component model that captures both discrete and continuous changes, for which timed-event sequences (TESs) form instances of a component behavior. An observation is a set of events with a unique time stamp. A component has an interface that defines which events are observable, and a behavior that denotes all possible sequences of its observations (i.e., a set of TESs). The precise machinery that generates such a component is abstracted away. Instead, we present interaction between components as an algebra on components, which includes a wide variety of user-defined operations.
Decomposition is dual to composition, as it simplifies a component behavior by removing some of its parts. Decomposition is interesting in two ways: it gives insight into whether a system is a composite that includes a specific component, and it returns a subsystem that, in composition with that component, would give back the initial system. Decomposition is not unique, and may induce a "cost" or "measure" on its alternatives, i.e., a component A may be seen as a product B × C or B × D with C ≠ D, where by some measures, the cost of using C or D may differ. While the observable behavior may not change, i.e., the set of sequences of observations stays the same, the substitution of a component with another may somehow improve the overall system, e.g., by enhancing its efficiency. For instance, running time is often omitted when specifying systems whose behaviors are oblivious to time itself. However, in practice, the time that a program takes to process its inputs matters. Thus, a component may be substituted with a component exposing the same behavior but running faster. Other criteria such as the size of the implementation, the cost of the production, procurement, maintenance, etc., may be considered in changing one component for another. In this paper, we also consider an orthogonal concern: the cost of coordination. Intuitively, the cost of coordination captures the fact that events of two components may be tightly related. For example, if two events are related, the occurrence of an event in one component implies the occurrence of some events in another component. While such constraints are declarative in our model, their implementation may be costly. Thus, the relation between observable events of two components may increase the underlying cost to concurrently execute those two components. Finally, having an operation to study system decomposition brings an alternative perspective on fault detection and diagnosis [10].
Formally, we extend our algebra of components [12] with a new type of operator: a division operation. Division intuitively models decomposition, and acts as an inverse composition operation. Practically, the division of component A by component B returns one component C from all components D such that A = B×D. Different cost models give rise to different operations of division by picking specific alternatives for D. We abstractly reason about cost using a partially ordered set of components and show that, for some orders, the set of candidates naturally gives rise to a maximal (minimal) element.
As a running example, we consider a set of robots moving continuously on a shared field. We use the operation of division to specify desirable updates that would prevent robots from interfering with other robots. We also apply division to find simpler components that, if used, would still preserve the entire system behavior. We finally specify the necessary coordination for the robots to self-sort on the field.
We summarize our contributions in this paper as follows:
• an extension of the algebra of components with a division operator;
• a result on the natural ordering of components;
• an application of division for specifying valid updates of components.

Preliminaries
A Timed-Event Stream (TES) σ over a set of events E is an infinite sequence of observations, where its i th observation σ(i) = (O, t), i ∈ N, consists of a pair of a set of events O ⊆ E, called the observable, and a positive real number t ∈ R + as time stamp. A TES has the additional properties that its consecutive time stamps are monotonically increasing and non-Zeno, i.e., if σ(i) = (O i , t i ) is the i th element of TES σ, then (1) t i < t i+1 , and (2) for any time t ∈ R + , there exists an element σ(i) = (O i , t i ) in σ such that t < t i . We use σ ′ to denote the derivative of the stream σ, such that σ ′ (i) = σ(i + 1) for all i ∈ N. We refer to the stream of observables of σ as its first projection pr 1 (σ) ∈ P(E) ω , and the stream of time stamps as its second projection pr 2 (σ) ∈ R ω + , such that σ(i) = (pr 1 (σ)(i), pr 2 (σ)(i)) for all i ∈ N. We write σ(t) = O if there exists i ∈ N such that σ(i) = (O, t), and σ(t) = ∅ otherwise. We use dom(σ) to refer to the set of observable time stamps, i.e., the set dom(σ) = {t ∈ R + | ∃i .pr 2 (σ)(i ) = t}.
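Condition (1) above can be checked mechanically on any finite prefix of a TES, while non-Zenoness (condition (2)) constrains the infinite stream and cannot be decided from a prefix. The following Python sketch illustrates the definitions on finite prefixes; the encoding and names are ours, not from the paper:

```python
from typing import FrozenSet, List, Tuple

# An observation is a pair (O, t) of a set of events and a time stamp.
Observation = Tuple[FrozenSet[str], float]

def is_valid_prefix(prefix: List[Observation]) -> bool:
    """Condition (1): time stamps are positive and strictly increasing.
    Non-Zenoness (condition (2)) is a property of the infinite stream
    and cannot be decided from a finite prefix."""
    times = [t for _, t in prefix]
    return all(t > 0 for t in times) and all(a < b for a, b in zip(times, times[1:]))

def pr1(prefix: List[Observation]) -> List[FrozenSet[str]]:
    """First projection: the stream of observables."""
    return [o for o, _ in prefix]

def pr2(prefix: List[Observation]) -> List[float]:
    """Second projection: the stream of time stamps."""
    return [t for _, t in prefix]
```

For instance, `is_valid_prefix` rejects any prefix whose time stamps fail to increase strictly.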

Components
A component in our model does not include the details of how elements of its behavior are constructed, but specifies only what sequences of observations over time are acceptable. Practically, there might be different ways to construct the same parts of a component behavior, but we abstract away the details of such an implementation as unobservable. We use TES (E ) to denote the set of all TESs over the set of events E. A component is a pair of a set of events E, called its interface, and a behavior L ⊆ TES (E ).
Given component A = (E A , L A ), we write σ : A for a TES σ ∈ L A . In order to demonstrate the applicability of our framework, we model a robot, a field, and a protocol as components. We show how components capture both cyber and physical aspects of systems.
Example 1 (Roaming robots). We capture, as a component, sequences of observations emerging from discrete actions of a robot at a fixed time frequency. For simplicity, we consider that the robot can perform actions of two types only: a move in a cardinal direction, and a read of its position sensor. A move action of robot i creates an event of the form d (i , p) where d is the direction, and p is the power required for the move. The read action of robot i generates an event of the form read (i , (x ; y)) where (x; y) is a coordinate location.
Formally, we write R(i, T, P ) = (E R (i, P ), L R (T )) for the robot component with identifier i, with E R (i, P ) the set {S(i, p), W (i, p), N(i, p), E(i, p), read(i, (x; y)) | x, y ∈ [−20, 20], p ≤ P } and L R (T ) ⊆ TES (E R (i , P)) such that all observations are time stamped with a multiple of the period T ∈ R + , i.e., for all σ ∈ L R (T ), if (O, t) ∈ σ then there exists k ∈ N such that t = k · T . The component R(i, T, P ) therefore captures all robots whose directions are restricted to S(outh), W(est), N(orth), and E(ast), whose power is limited to P , and whose location values are integers in the interval [−20, 20]. The robot stops whenever p = 0.
In Table 1, we display the prefix of one TES from the behavior of three robot components. Note that each line corresponds to a time instant, for which each robot may or may not have observed events. The symbol '−' represents no observable, while otherwise we show the set of events observed. The time column is factorized by the period T , shared by all robots. Thus, at time 3 · T , robot R(1, T, P ) moves west, while robot R(2, T, P ) moves north, and robot R(3, T, P ) moves east, all with required power p.
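A prefix of one TES in the behavior of R(i, T, P ) can be generated from a list of actions, with every observation stamped by a multiple of the period T , as required by L R (T ). The string encoding of events below is our own, hypothetical choice:

```python
def robot_prefix(i, T, actions):
    """Build a TES prefix for robot i where the k-th observation (k >= 1)
    is stamped k * T, as required by the behavior L_R(T)."""
    prefix = []
    for k, action in enumerate(actions, start=1):
        if action[0] == "read":                  # ("read", (x, y))
            _, (x, y) = action
            event = f"read({i}, ({x}; {y}))"
        else:                                    # (direction, power), e.g. ("W", 2)
            d, p = action
            event = f"{d}({i}, {p})"
        prefix.append((frozenset({event}), k * T))
    return prefix
```

For example, `robot_prefix(1, 0.5, [("W", 2), ("read", (3, 0))])` stamps its two observations at 0.5 and 1.0.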
In the robot component of Example 1, observations occur at a fixed frequency. For some physical components, however, observations may occur at any point in time. For instance, consider the field on which the robot moves.

Table 1: Three prefixes of timed-event sequences for R(1, T, P ), R(2, T, P ), and R(3, T, P ), where T and P are fixed, and each move action consumes the same power p.
Each time a robot moves may induce a change in the field's state, and the field's state may be observable at any time or frequency. Internally, the field may record its state changes by a continuous function, while restricting the possibilities of the robots to move due to physical limitations. We describe the field on which the robot moves as a component, and we specify, in Example 5, how robot components interact with the field component.
Example 2 (Field). The field component captures, in its behavior, the dynamics of its state as a sequence of observations. The collection of objects on a field is given by the set I. The state of a field is a triple (((x; y) i ) i∈I , ( − → v i ) i∈I , t) that describes, at time t ∈ R + , the position (x; y) i and the velocity − → v i of each object in I. We model each object in I by a square of dimension 1 by 1, and the coordinate (x; y) i represents the central position of the square. We use µ = (((x 0 ; y 0 ) i ) i∈I , ( − → v 0i ) i∈I , t 0 ) as an initial state for the field, which gives each object i ∈ I an initial position and an initial velocity. Note that static obstacles on the field can be modeled as objects i ∈ I with position (x; y) i and zero velocity.
Formally, the field component is the pair F µ (I) = (E F (I), L F (I, µ)), where an event (x; y) i displays the location of object i on the field. The set L F (I, µ) ⊆ TES (E F (I )) captures all sequences of observations that consistently sample trajectories of each object in I, according to the change of state of the field and its internal constraints. As a physical constraint, we impose that no two objects can overlap, i.e., for any distinct i, j ∈ I and for all times t ∈ R + , with (x; y) i and (u; v) j their respective positions, [x−0.5, x+0.5] ∩ [u−0.5, u+0.5] = ∅ or [y−0.5, y+0.5] ∩ [v−0.5, v+0.5] = ∅. Even though the mechanism for such a constraint is hidden in the field component, typically, the move of a robot is eventually limited by the physics of the field.

Remark 1. There is a fundamental difference between the robot component in Example 1 and the field component in Example 2. The robot component has an adequate underlying sampling frequency, which prevents missing any event if observations are made at that frequency. However, the field has no such frequency for its observations, which means that another intermediate observation may occur between any two observations. In [12], we capture, as a behavioral property, the ability of a component to interleave observations between any two observations.
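The non-overlap constraint can be phrased operationally: two closed unit squares are disjoint exactly when their projections on at least one axis do not intersect. A small sketch of the check the field enforces on every state; the function names and the dictionary encoding of positions are ours:

```python
from itertools import combinations

def squares_disjoint(p, q):
    """Closed 1-by-1 squares centred at p = (x, y) and q = (u, v) are
    disjoint iff their x-projections or y-projections do not intersect."""
    (x, y), (u, v) = p, q
    return abs(x - u) > 1.0 or abs(y - v) > 1.0

def field_state_ok(positions):
    """Physical constraint of Example 2: no two distinct objects overlap."""
    return all(squares_disjoint(p, q)
               for (_, p), (_, q) in combinations(positions.items(), 2))
```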
Example 3 (Protocol). As shown in Example 2, physics may impose some constraints that force robots to coordinate. A protocol is a component that, for instance, coordinates the synchronous movement of a pair of robots. For example, when two robots face each other on the field, the swap(i, j) protocol moves robot R(i, T, P ) north, west, and then south, as it moves robot R(j, T, P ) east. Note that the protocol requires the completion of a sequence of moves to succeed. Another robot could be in the way, and therefore delay the last observables of the sequence. The swap component is defined by the set of TESs realizing this sequence of coordinated moves.

Components are declarative entities that may denote either the behavior of a specification, or the behavior of an implementation. The usual relation between the behavior of a program and a property of that program constitutes a refinement relation.

Proof. See Appendix A.
An alternative to refinement is containment. The containment relation uses a point-wise inclusion relation on observations of two TESs. The containment relation on components requires that every TES in the behavior of one is point-wise contained in a TES from the behavior of the other.
We extend the containment relation to components: A ≤ B if and only if E A ⊆ E B , and for every σ ∈ L A , there exists a τ ∈ L B such that σ ≤ τ .

Lemma 2. The relation ≤ is a pre-order over an arbitrary set of components. Let C be a set of components such that, for all components A ∈ C and for any two TESs σ : A and τ : A, (σ ≤ τ ) =⇒ σ = τ ; then ≤ is a partial order on C.
Proof. See Appendix A.
Remark 2. The restriction in Lemma 2 to consider components with no internal containments between distinct TESs is necessary for ≤ to be a partial order. Consider for instance a component A with only two TESs in its behavior, σ : A and τ : A, where pr 1 (σ) = ({a, b}) ω and pr 1 (τ ) = ({a}) ω and pr 2 (σ) = pr 2 (τ ). Let B be a component with a singleton behavior δ : B such that pr 1 (δ) = ({a, b}) ω and pr 2 (δ) = pr 2 (σ). Then, A ≤ B and B ≤ A, but A ≠ B.
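On finite prefixes, the point-wise containment underlying ≤ and the situation of Remark 2 can be checked directly. The prefix encoding and function name below are ours:

```python
def prefix_leq(sigma, tau):
    """Point-wise containment on equal-length prefixes: observations share
    their time stamps and the observables of sigma are included in tau's."""
    return len(sigma) == len(tau) and all(
        o1 <= o2 and t1 == t2 for (o1, t1), (o2, t2) in zip(sigma, tau))

# Prefixes of the TESs of Remark 2: pr1(sigma) = ({a, b})^omega and
# pr1(tau) = ({a})^omega, with equal time stamps.
sigma = [(frozenset({"a", "b"}), float(k)) for k in range(1, 4)]
tau = [(frozenset({"a"}), float(k)) for k in range(1, 4)]
```

Here `prefix_leq(tau, sigma)` holds while `prefix_leq(sigma, tau)` does not; lifted to the components of Remark 2, this yields A ≤ B and B ≤ A with A ≠ B.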

Algebraic product
A complex system typically consists of multiple components that interact with each other. In [12], we define a family of binary products acting on components, each parametrized with an interaction signature that captures the type of the interaction between a pair of components. As a result, the product of two components, under a given interaction signature, returns a new component whose behavior is obtained from constraining each operand component behavior according to the interaction signature. An interaction signature consists of two elements: a composability relation and a composition function.
A composability relation captures what TESs are composable. Intuitively, when no constraints are imposed on occurrences of events in E A and E B , the identity composability relation allows every pair of TESs to compose. Alternatively, if some events are shared between E A and E B , only observations in σ and τ that agree on the occurrence or exclusion of shared events may compose, e.g., as in Example 4. Also, R may capture some events that cannot occur at the same time (e.g., the encapsulation operator of ACP [5]). A composability relation defines which pairs of TESs can compose, but a product operator must also specify how to produce a new TES from a pair of composable TESs.
A composition function ⊕ : TES (E 1 ) × TES (E 2 ) → TES (E 1 ∪ E 2 ) forms the composition σ ⊕ τ of two TESs σ and τ . Intuitively, the zipping and item-wise union of the two TESs σ and τ is one possible such composition function (such as ∪ in Example 4). As well, one may construct a new event as the result of the simultaneous occurrence of some events (e.g., the communication function of ACP [8]).

Table 2: Three prefixes of timed-event sequences for R(1, T, P ), R(2, T, P ), and R(3, T, P ), together with the prefix resulting from forming their synchronous product with the swap protocol.

Let A = (E A , L A ) and B = (E B , L B ) be two components. We use Σ = (R, ⊕) to range over interaction signatures, with R a composability relation and ⊕ a composition function.
For simplicity, we write × as a general product with arbitrary interaction signature Σ.
Remark 3. In general, composability relations on TESs are derived from composability conditions on observations. In [12], we use a co-algebraic procedure to lift constraints on observations to constraints on TESs. Such a scheme provides a wide range of possibilities for user-defined products, and a co-inductive proof principle for interaction signature equivalence.
A useful interaction signature Σ is the one that synchronizes shared events between two components. We write Σ sync = (R sync , ∪) for such interaction signature, and give its specification in Example 4.
Example 4 (Synchronous signature). In this example, we define the synchronous interaction signature Σ sync = (R sync , ∪). In a cyber-physical system, the action (of a cyber system) and the reaction (of a physical system) co-exist simultaneously in the same observation, and are therefore synchronous.
First, we define ∪ that, given two TESs, returns the union of their simultaneous observations and the interleaving of the rest, ordered by time stamp. Then, R sync (E 1 , E 2 ) relates pairs of TESs such that all shared events occur at the same time in both TESs, i.e., (σ, τ ) ∈ R sync (E 1 , E 2 ) if and only if, for all time stamps t ∈ R + , σ(t) ∩ E 2 = τ (t) ∩ E 1 . The synchronous interaction signature Σ sync = (R sync , ∪) leads to the product × Σsync that forces two components to observe shared events at the same time. We write ⊲⊳ for such a product. As a result, R(1, T, P ) ⊲⊳ R(2, T, P ) ⊲⊳ R(3, T, P ) ⊲⊳ swap(2, 3) captures all sequences of moves for the robots constrained by the swap protocol.
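Both halves of Σ sync can be sketched on finite prefixes, reading σ(t) as the observable at time t, or ∅ when t is not a time stamp of σ. The encoding below is ours:

```python
def obs_at(prefix, t):
    """sigma(t): the observable at time t, or the empty set."""
    return next((o for o, u in prefix if u == t), frozenset())

def sync_composable(sigma, tau, E1, E2):
    """R_sync(E1, E2) on prefixes: at every time stamp, the events of one
    TES lying in the other's interface agree with the other's observation."""
    times = {t for _, t in sigma} | {t for _, t in tau}
    return all(obs_at(sigma, t) & E2 == obs_at(tau, t) & E1 for t in times)

def union_compose(sigma, tau):
    """The composition function: unite simultaneous observables and
    interleave the rest, ordered by time stamp."""
    times = sorted({t for _, t in sigma} | {t for _, t in tau})
    return [(obs_at(sigma, t) | obs_at(tau, t), t) for t in times]
```

Two prefixes sharing event `s` compose only if `s` occurs at the same times in both; non-shared events interleave freely.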
Example 5 (Field-Robot signature). The interactions occurring between the field and the robot components impose simultaneity on some disjoint events. For instance, every observation of the robot containing the event d(i, p) ∈ E R (i, P ) must occur at the same time as an observation of the field containing the event move(i , − → v ), with − → v the velocity resulting from the direction d and the required power p. Also, every observation containing the event read (i , (⌊x ⌋, ⌊y⌋)) ∈ E R (i , P) must occur at the same time as an event (x; y) i ∈ E F (I), where ⌊z⌋ gives the integer part of z.
Formally, we capture such interactions in the interaction signature Σ RF = (R RF , ∪), where R RF is the smallest symmetric relation satisfying the simultaneity constraints above. As a result, the product (R(1, T, P ) ⊲⊳ R(2, T, P ) ⊲⊳ R(3, T, P )) × Σ RF F µ (I) captures all sequences of observations for the three robots constrained by the field component.

Remark 5. In practice, it is unlikely that two observations happen at exactly the same time. However, in our framework, the time of an observation is an abstraction that requires every event of the observation to occur after the events of the previous observation, and before the events of the next observation.
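The read half of Σ RF can be sketched as a check on a pair of simultaneous observations. The tuple encoding of events, `("read", i, (n, m))` for the robot and `("pos", i, (x, y))` for the field, is our own:

```python
import math

def rf_read_ok(robot_obs, field_obs):
    """Sigma_RF, read half (sketch): every read(i, (n, m)) in the robot's
    observation must be matched, in the simultaneous field observation,
    by a position (x; y)_i with (n, m) = (floor(x), floor(y))."""
    positions = {}
    for f in field_obs:
        if f[0] == "pos":
            positions[f[1]] = f[2]
    for ev in robot_obs:
        if ev[0] == "read":
            _, i, (n, m) = ev
            if i not in positions:
                return False
            x, y = positions[i]
            if (math.floor(x), math.floor(y)) != (n, m):
                return False
    return True
```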
Definition 5 (Monotonicity). Let × be a commutative product. Then, × is monotonic if and only if, for B ⊑ A and for any C, we have B × C ⊑ A × C.

Remark 6. Monotonicity states that behavior inclusion between components is preserved by the product. Let A and B be two components such that B ⊑ A. Suppose that P is a component that models a property satisfied by component A and preserved under product with a component C; then P is satisfied by component B and by component B × C, by monotonicity. Note that the definition of monotonicity assumes × to be commutative. That assumption can be relaxed by defining left and right monotonicity.
Proof. See Appendix A.

Division and Conformance
Consider two components B and C, and a product × over components that models the interaction constraints between B and C. We use = on components as strict structural equality: A = B if and only if E A = E B and L A = L B . In general, a component D that can substitute for C in the product A = B × C is not unique. The set of alternatives for C depends, moreover, on the product ×, on the component B, and on the behavior of A. A 'goodness' measure may induce an order on this set of alternative components, and eventually give rise to a best substitution candidate. More generally, the problem is to characterize, given two components A and B and an interaction product ×, the set of all components C such that A = C × B.

Divisibility and quotient
The divisibility of a component A by a component B under a product × captures the possibility to write A as a product of B with another component.
A is divisible by B under × when A is both left and right divisible by B under ×. Intuitively, the set of witnesses for divisibility contains every component whose product (under the same interaction signature) with the divisor yields the dividend. We call such witnesses quotients, and write A/ * Σ B for the set of quotients of A by B under × Σ .
We define left (right) division operators that pick, given a choice function, the best element from their respective sets of quotients as the result of the division.
The right (respectively, left) quotient of A divided by B, under the product × Σ and the choice function χ over the right (respectively, left) quotients, is the element that χ selects from that set.

Example 6 (Lowest element). One measure that can impose an order on a set of quotients uses the fact that a component may contain all the behavior of another. Indeed, every quotient has the property that, in composition with the divisor, it yields the dividend. Then, a quotient whose behavior is fully contained in that of every other quotient may be optimal in terms of behavior complexity. Let C, the set of right (left) quotients for A divisible by B under the product ×, be equipped with an ordering such that the lowest element is an element of C; then a function that picks the lowest element can act as a choice function to define the result of the division of A by B.
One may consider ≤ as a natural ordering on quotients. However, the set of quotients equipped with the containment relation may not have a lowest element. One such example is shown in Table 3. Consider A, B, C, and D with {0, 1, 2}, {0, 1}, {0, 2}, and {1, 2} as interfaces, respectively.

Table 3: Counter example for a lowest element in the division of A by B, with C and D two quotients.

Using the synchronous composition operation, the TESs τ and η compose with the TES δ to give the TES σ. However, C and D require synchronization on their shared event to compose with B. A component smaller than C and D would be a component F , whose interface is the singleton set containing event 2. However, such a component has no shared event with B, and may therefore freely interleave its observations, which does not correspond with observations in A. Thus, F is not an element of the set of quotients, and C and D have no lower bound in the set of quotients. We show in the next theorem that a subset of quotients with a shared interface has a lower bound. We discuss how the choice of an interface for a quotient may be guided by some qualitative design choices.

Theorem 1. Let ≤ be the containment relation introduced in Definition 3.
Let × Σ be a commutative, associative, and idempotent product on components, such that for any two components C and D with the same interface, C × Σ D ≤ C. Given A divisible by B under × Σ , any finite subset of quotients sharing the same interface E has a lower bound that is itself a quotient in A/ * Σ B.

Proof. Let C(E) be a finite subset of the set {C | C has interface E and C ∈ A/ * Σ B}. We also write × Σ C(E) for the product of all components in C(E).
For any C ∈ C(E), we have A = B × Σ C. Given associativity, commutativity, and idempotency of × Σ , for any C 1 , C 2 ∈ C(E): B × Σ (C 1 × Σ C 2 ) = (B × Σ C 1 ) × Σ (B × Σ C 2 ) = A × Σ A = A, which, applied over the set C(E), gives A = B × Σ (× Σ C(E)). Thus, × Σ C(E) is itself a quotient in A/ * Σ B; and since C × Σ D ≤ C for any two components with interface E, it is a lower bound of C(E). □

When the conditions of Theorem 1 are satisfied, we write A/ ≤,E Σ B for the lower bound of the set of quotients with interface E.
Remark 7. The operation of division defined by Theorem 1 raises several points for discussion. First, the set of quotients sharing the same interface is structured. Indeed, when the interface is fixed, each finite subset of quotients has a lowest element under ≤, which makes the definition of a division operator possible. Second, the fact that there is, in general, no minimal element over the set of all quotients reveals the important role that interfaces play in system decomposition. In other words, one may consider another measure to choose a quotient interface, that is orthogonal to behavior containment (see Section 5 for a discussion about the cost of coordination).
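The role of the interface, and the closure of same-interface quotients under the product (Theorem 1), can be illustrated on a drastically simplified finite model: a "component" is a pair of an interface and a set of single observables, and a toy synchronous product composes two observables when they agree on shared events. Everything below is our own toy abstraction, not the paper's model:

```python
from itertools import chain, combinations

def product(A, B):
    """Toy synchronous product: observables compose when they agree on
    shared events; composition is union."""
    (EA, LA), (EB, LB) = A, B
    return (EA | EB, frozenset(a | b for a in LA for b in LB
                               if a & EB == b & EA))

def quotients(A, B, E):
    """Brute force: all C with interface E such that B x C = A."""
    obs = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(E), r) for r in range(len(E) + 1))]
    found = []
    for mask in range(1, 2 ** len(obs)):
        L = frozenset(obs[k] for k in range(len(obs)) if mask >> k & 1)
        if product(B, (E, L)) == A:
            found.append((E, L))
    return found
```

With A over {a, b} and B over {a}, a quotient exists for interface {b} but none for the empty interface, mirroring the role interfaces play in decomposition.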
We use 1 to denote the component (∅, TES (∅)), and 0 to denote the component (∅, ∅), which has the empty interface and no behavior.

In order to reason about components algebraically, we want some properties to hold: for instance, that a component is divisible by itself, and that its set of quotients contains the unit element.
The resulting quotient denotes the component that, in composition with P , recovers the initial system. Note that the component resulting from division ranges over the interface E ′ . As a consequence, all events (n, m) i have been hidden in the quotient. Note that the division exists due to the interaction signature Σ RF , which imposes simultaneity on occurrences of the events read (i , (n, m)) and (n, m) i .

Conformance
The criterion for divisibility of A by B, under product ×, is the existence of a quotient C such that B × C = A. The equality between B × C and A makes division a suitable decomposition operator. We can define a similar operation to describe all components C that coordinate B in order for the result to behave in conformance with specification A. In this case, we replace equality with the refinement relation of Definition 2. We consider in the sequel non-empty coordinators, i.e., components different from 0 = (∅, ∅).

Definition 10 (Right (left) conformance coordinators). The set of right (respectively, left) conformant coordinators that make B behave in conformance with A, under the product × Σ , is denoted A ↓ * Σ B.
Similarly to the set of quotients, the set of coordinators having the same interface is structured and gives ways to define non-trivial coordinators, as in Theorem 2.
Definition 11 (Right (left) principal coordinator). Let B be conformable to component A, and let χ be a choice function that selects the best component out of a set of components. The right (respectively, left) principal coordinator that makes B behave in conformance with A is the element that χ selects from the corresponding set of conformant coordinators.

Example 8 (Greatest element). One measure that can impose an order on a set of coordinators uses containment. The refinement relation used to define conformance also accepts coordinators that have no behavior at all, and that trivially satisfy the behavior inclusion relation. Maximizing the observables of the resulting composite behavior set corresponds to finding the greatest coordinator under a containment relation. More generally, if C, the set of right (left) coordinators that make B conformant to A under ×, is equipped with an ordering such that the greatest element is an element of C, then the function that picks the greatest element can act as a choice function to select the best conformance coordinator of B to behave as A under ×.
Following the result of Theorem 1, if the interface of the quotient is fixed, then the subset of quotients that have the same interface has a least element with the containment relation introduced in Definition 3. We show in Theorem 2 that a similar result holds for the set of coordinators.

Theorem 2. Let ≤ be the containment relation introduced in Definition 3.
Let × (R,⊕) be a commutative, associative, idempotent, and monotonic (as in Definition 5) product on components. Given B conformant with A under × (R,⊕) , any finite subset of coordinators sharing the same interface E has an upper bound that is itself a coordinator in A ↓ * Σ B.

Finding a conformance coordinator that makes B behave in conformance with A is looser than finding a quotient for A divisible by B: any quotient of A by B under a product × Σ is therefore a coordinator that makes B conformant with A. Such a quotient-coordinator has the property that it "coordinates" B such that the resulting behavior covers the entire behavior of A.
For some suitable products, Theorem 1 and Theorem 2 state the existence of, respectively, a lowest element in the subsets of quotients and a largest element in the set of coordinators that share the same interface. The synchronous product introduced in Example 4 is one product that satisfies the requirements of both theorems. We show in Section 4 how division under synchronous product can characterize valid updates of a system, and how conformance under synchronous product can characterize valid protocols.
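Conformance can be illustrated on a drastically simplified finite model: a "component" is a pair of an interface and a set of single observables, a toy synchronous product composes observables that agree on shared events, and refinement is behavior inclusion. Since equality is relaxed to inclusion, more coordinators than quotients exist in general. Everything below is our own toy abstraction, not the paper's model:

```python
from itertools import chain, combinations

def product(A, B):
    """Toy synchronous product: observables compose when they agree on
    shared events; composition is union."""
    (EA, LA), (EB, LB) = A, B
    return (EA | EB, frozenset(a | b for a in LA for b in LB
                               if a & EB == b & EA))

def refines(X, A):
    """X refines A in the toy model: same interface, behavior inclusion."""
    return X[0] == A[0] and X[1] <= A[1]

def coordinators(A, B, E):
    """Brute force: all non-empty C with interface E such that
    B x C refines A (inclusion instead of the equality used for quotients)."""
    obs = [frozenset(s) for s in chain.from_iterable(
        combinations(sorted(E), r) for r in range(len(E) + 1))]
    found = []
    for mask in range(1, 2 ** len(obs)):
        L = frozenset(obs[k] for k in range(len(obs)) if mask >> k & 1)
        if refines(product(B, (E, L)), A):
            found.append((E, L))
    return found
```

In the example below, three coordinators over interface {b} make B conform to A, and the one with the largest behavior plays the role of the greatest element of Example 8.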

Applications of Division
In this section, we consider the robot, field, and protocol components introduced in Examples 1, 2, and 3, together with the synchronous product ⊲⊳ of Example 4 and the product × Σ RF of Example 5. Both products are commutative (Lemma 1 in [12]), and we therefore omit the right and left qualifiers for division and conformance.
Initial conditions. For each robot, we fix the power requirement of a move and the time period T between two observations to be such that a move of a robot during a period T corresponds to a one unit displacement on the field. Then, each move action of a robot changes the location of the robot by a fixed number of units, or none if there is an obstacle. We write R(i) for robot R(i, T, P ) with such fixed P and T . As an example, the observation ({d(i), read (i, (x; y))}, t) followed by the observation ({read (i , (x ′ ; y ′ ))}, t + T ) leaves only a few possibilities for (x ′ ; y ′ ): either (x; y) = (x ′ ; y ′ ), in which case the robot got blocked in the middle of its move, or (x ′ ; y ′ ) increases (or decreases) the x or y coordinate by one unit, according to the direction d.
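The few possibilities for a reading one period after a move can be enumerated directly; under the fixed P and T above, a move either advances the robot one unit in direction d or leaves it in place when blocked. The naming below is ours:

```python
# Unit displacement per direction, under the fixed power P and period T.
DELTA = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def possible_readings(pos, d):
    """Positions that read(i, .) may report one period after a move in
    direction d: the robot either advanced one unit or was blocked."""
    (x, y), (dx, dy) = pos, DELTA[d]
    return {(x, y), (x + dx, y + dy)}
```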

Approximation of the Field as a Grid
Problem. A field component captures in its behavior the continuous responses of a physical field interacting with robots roaming on its surface. The interface of the field therefore contains an event, per object, for each possible position and each possible move. In some cases, however, only a subset of those events is of interest. For instance, we may want to consider only integer positions of objects on the grid, and discard intermediate observables. As a result, such a component would describe a discrete grid instead of a continuous field, while preserving the internal physics: no two objects are located at the same position. We show how to define the grid as a subcomponent of the field, using the division operator.
Definition of the grid. We use division to capture a discrete grid component G µ (I). We use the component C = (E G (I), TES (E G (I ))) to denote the free component whose behavior contains all TESs ranging over the interface E G (I). Then, by application of Theorem 1, we use the least element with respect to ≤ of the set of quotients of C × Σsync F µ (I) divided by F µ (I) under Σ sync to define the grid. Thus, G µ (I) = (C × Σsync F µ (I)) / ≤,E G (I) Σsync F µ (I), which naturally emerges as a subcomponent of the field component F µ (I).
Consequences. The grid component inherits some physical constraints from the field F µ (I), but is strictly contained in the field component. There is a fundamental difference between an approximation of the position as a robot sensor detects it, and a restriction of the field to integer positions as in the grid component. In the former, the component reads a value that does not correspond precisely to its current position, while in the latter, the position read is exact but observable only at integer values. As a result, the two component expressions (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I) and (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF F µ (I) restrict each robot's behavior in different ways: the grid component allows discrete moves only, and the position that a robot reads is the same position as that of an object on the grid, while the field component allows continuous moves, but the position that a robot reads is an approximation of the position of the robot on the field. In the sequel, we use the grid component G µ (I) instead of the field component.

Updates of components
Problem. The interaction signature of a product operator on components restricts which pairs of behaviors are composable. As a consequence, some components may have more behavior than necessary, namely the elements that do not occur in any composable pair. An update is an operation that preserves the global behavior of a composite system while changing an operand of a product in the algebraic expression that models the composed system. The goal of such an update is, for instance, to remove some behaviors that are not composable, or to prevent some possible runtime errors. We give an example of such an update, which replaces a robot component by a new version that removes some of its possibly blocking moves.
Scenario. For each robot, we fix its behavior to consist of TESs that alternate between move and read observations. Moreover, for a robot's period T and an increasing sequence (n i ) i∈N of natural numbers, we let T × n i represent the timestamp of the i-th observation of a TES in its behavior. Table 4 displays prefixes of four TESs in the behaviors of R(1), R(2), and R(3); for direction d and robot i, the table writes d(i) instead of d(i, p), since the power p is initially fixed, and omits the set notation, as observations are all singletons. For instance, the TES η : R(1) captures the observations resulting from R(1) moving west twice. Note that, in composition with the grid component, the readings may conflict with the actual position of the robot, as some moves may not be allowed due to obstacles on the path. For instance, given the expression (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I), the TES η is not observable, as it is not composable with any of the TESs τ or δ from R(2) and R(3), respectively. We show how to use division to remove all such behaviors.
Update. The replacement for R(1) should preserve the global behavior. We use division to define an update R ′ (1) of R(1) that removes all elements from its behavior that are not composable with any element from the behavior of R(2) and R(3) under the constraints imposed by the grid.
As a result, the updated component contains in its behavior all elements ranging over the interface E R (1) that are composable with elements in the behavior of the dividend component. Note that the set of quotients is filtered on the interface E R (1), and R(1) trivially qualifies as a quotient. However, R(1) is not minimal, as η can be removed from its behavior.
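The update-by-division step can be sketched as follows, again under our own simplifications: traces are tuples of (time, cell) pairs, composability means that no two robots occupy the same cell at the same timestamp, and the names composable and update are ours. The update keeps exactly those traces of a robot that are composable with at least one choice of traces from the other robots.

```python
# Sketch of removing non-composable behaviors by division (our rendering).
from itertools import product

def composable(*traces):
    """No two robots share a cell at the same timestamp."""
    occupied = {}
    for trace in traces:
        for t, cell in trace:
            occupied.setdefault(t, []).append(cell)
    return all(len(cells) == len(set(cells)) for cells in occupied.values())

def update(behavior, *other_behaviors):
    """Keep only traces composable with some choice of the others' traces."""
    return {
        trace for trace in behavior
        if any(composable(trace, *choice)
               for choice in product(*other_behaviors))
    }

r1 = {((0, 3), (1, 2)),   # moves west: blocked, cell 2 will be occupied
      ((0, 3), (1, 3))}   # stays put
r2 = {((0, 2), (1, 2))}   # sits on cell 2 at time 1
r3 = {((0, 1), (1, 1))}

r1_updated = update(r1, r2, r3)
print(r1_updated)  # only the stay-put trace survives
```

As in the text, the removed trace plays the role of η: it is a perfectly valid behavior of R(1) in isolation, but it occurs in no composable pair, so the update may drop it without changing the global behavior.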
Consequence. Using division, we defined an update for component R(1) that removes some elements of its behavior while preserving the global behavior of the composite expression.
Note that the fact that, in our example, each robot alternates between a move and a read is crucial to remove, by composition, undesired behavior. Indeed, the readings of each robot must synchronize with the location displayed on the grid, which implies that the robot successfully moved. The constraints imposed by the grid coordinate the robots by preventing two robots from sharing the same location.

Coordination and distribution
Problem. Consider the scenario previously described with an additional modification: a robot no longer observes its location after every move, but only at the end of its sequence of moves. The TESs of each robot's behavior are described in Table 5, where (x i , y i ) ranges over the possible position readings for robot i. As a result, conflicts between robots may no longer be observable, and the timing of observations may render some incidents of robots blocking each other unobservable. We define a coordinator that makes the system conformant to a global property. As opposed to the division operation, a conformance coordinator may restrict the system behavior to a subset that conforms to a specified property. We consider the following property P sorted (I): "eventually, all the robots get sorted, i.e., every robot R(i) eventually ends on the grid location (i; 0)."
Global coordinator. From the sort property, we can define a component C sorted (I) = (E sorted (I), L sorted (I)) whose interface is the union of the interfaces of all robots and the grid, i.e., E sorted (I) = E G (I) ∪ (⋃ i∈I E R (i)), and whose behavior L sorted (I) ⊆ TES (E sorted (I)) contains all sequences of moves that make the robots eventually end in their respective sorted grid positions, i.e., σ ∈ L sorted (I) if and only if there exists t ∈ R + such that (O, t) ∈ σ with, for every i ∈ I, the observation that robot i is at position (i; 0) in O. Note that, by construction, the behavior of component C sorted (I) may contain some TESs from the behavior of component (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I), namely every TES that satisfies the property.
Consequently, the product of component C sorted (I) with (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I), under the signature Σ sync , defines a component whose behavior contains all elements in the behavior of (R(1) ⊲⊳ R(2) ⊲⊳ R(3))× Σ RF G µ (I) that are also in the behavior of C sorted (I). Therefore, if the behavior of component ((R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I)) ⊲⊳ C sorted (I) is not empty, (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I) is conformant to C sorted (I) and C sorted (I) is a principal coordinator. However, using C sorted (I) as a coordinator requires each component to synchronize, at each step, with every other component. We show how to define a different choice function on the set of coordinators, in order to identify a minimalist form of coordination.
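In this toy rendering (our own simplification, not the paper's construction), the synchronous product with the property component amounts to intersecting the system's behavior with the set of traces satisfying the property; conformance then reduces to non-emptiness of the result. The trace encoding and the name eventually_sorted are ours.

```python
# A conformance coordinator as a property component, rendered as a filter.
# A trace is a tuple of (time, positions) observations, where positions is a
# tuple giving each robot's cell (0-indexed here for simplicity).

def eventually_sorted(trace):
    """Property: at some observation, robot i sits at cell i."""
    return any(all(p == i for i, p in enumerate(obs))
               for _, obs in trace)

system_behavior = {
    ((0, (2, 1, 0)), (1, (0, 1, 2))),   # ends sorted
    ((0, (2, 1, 0)), (1, (2, 1, 0))),   # never sorted
}

# Synchronous product with the property component = behavior intersection.
conformant = {s for s in system_behavior if eventually_sorted(s)}
print(len(conformant))  # the system is conformant iff this is non-empty
```

The global coordinator is "principal" in exactly this sense: it observes every event of the system, so its product can express any restriction of the global behavior, at the cost of synchronizing every component at every step.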
Minimalist coordinator. We define a coordinator whose interface is strictly contained in the interface of the global coordinator C sorted (I). More precisely, we search for a coordinator over the interface of robot R(1) that makes the system (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I) conformant to the property component C sorted (I). First, observe that the set of coordinators filtered on the interface E R (1) is empty. Indeed, for any set of timestamp factors n 1 , n 2 , n 3 , and n 4 for the observables of R(1) in Table 5, there exists an element from the behavior of R(2) that delays its first action until after n 3 , and eventually ends up in a blocking position. As a consequence, there is no coordinator restricted to the events of R(1) that makes (R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I) conformant to C sorted (I). Instead, we consider filtering the set of coordinators with the interface E R (1) ∪ {N(2)}. In this case, one can find a simple coordinator that makes the set of robots conform to the sort property. Indeed, every observation N(1) of robot R(1) must occur after an observable N(2) of robot R(2). As a result, such a coordinator C 12 restricts R(1) to move only after R(2) moves, which results in a composite system conformant to C sorted (I). Thus, the new coordinator C 12 , in product with R(1), R(2), R(3), and the grid G µ (I), satisfies the sort property, i.e., ((R(1) ⊲⊳ R(2) ⊲⊳ R(3)) × Σ RF G µ (I)) ⊲⊳ C 12 ⊑ C sorted (I).
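The precedence constraint that C 12 imposes can be sketched directly, under our own encoding: we stand in "move(1)" and "move(2)" for the observables N(1) and N(2), and admit exactly the traces in which every move of R(1) is preceded by some move of R(2). The function name c12_allows is ours.

```python
# Sketch of the minimalist coordinator C12: it observes only the move events
# of R(1) and R(2), and accepts a trace iff every move of R(1) happens after
# some earlier move of R(2). Traces are tuples of (time, event) pairs.

def c12_allows(trace):
    seen_move2 = False
    for _, event in sorted(trace):   # walk observations in time order
        if event == "move(2)":
            seen_move2 = True
        elif event == "move(1)" and not seen_move2:
            return False
    return True

print(c12_allows(((0, "move(2)"), (1, "move(1)"))))  # True
print(c12_allows(((0, "move(1)"), (1, "move(2)"))))  # False
```

Because the coordinator only mentions two events, composing it with the system constrains only those events' occurrences, leaving all other observations unsynchronized; this is what makes it cheaper than the global coordinator.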
Consequence. We showed one global coordinator that makes the set of robots satisfy the sort property, and one minimalist coordinator over the interface E R (1) extended with an event from E R (2). The minimalist coordinator has an interface strictly included in that of the global coordinator, which therefore minimizes the amount of interaction among components. In the next section, we discuss the cost of coordination as a possible measure to order a set of conformance coordinators.

Discussion
The operations of division and conformance defined earlier characterize all possible updates and coordinators for a composite system. In general, the set of quotients or coordinators is not a singleton, which then necessitates a choice function to pick a component that best suits the needs. We saw in Theorem 1 how such a choice function can be defined using an ordering on components, and choosing the least among such components. Intuitively, such a choice function prefers a quotient with the least number of observations. However, Theorem 1 assumes a fixed interface, and does not discuss how to rank components according to their interfaces. We discuss alternative rankings, below, in the case of the synchronous product ⊲⊳ of Example 4.

Cost of coordination
Let A = (E A , L A ) and B = (E B , L B ) be two components such that the set of quotients of A divisible by B under ⊲⊳, the synchronous product, is non-empty. Consider, as well, a component C = (E C , L C ) in the same set of quotients. We discuss alternative scenarios based on the interface of component C.
Consider the case where E C ∩ E B ≠ ∅. Then, events in the intersection E C ∩ E B are events for which C and B must perform a simultaneous observation, i.e., with equal time stamps. The size of the intersection E C ∩ E B can therefore characterize how much coordination must take place between implementations of components B and C to successfully achieve those synchronous observations. Alternatively, if a component D = (E D , L D ) in the set of quotients is such that |E D ∩ E B | < |E C ∩ E B |, then the smaller number of shared events of D with B hints at a potentially smaller amount of coordination between components D and B.³ In the case that E C ∩ E B = ∅ and E C ≠ ∅, the family of quotients with interface E C is particularly useful. The fact that the two interfaces are disjoint means that A can be decomposed into two components that do not need any coordination. Indeed, as ⊲⊳ constrains only occurrences of shared events, if B and C share no events, then they can run completely independently of one another.
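The interface-based measure can be stated in a few lines; as the text cautions, it is a crude proxy for actual coordination overhead. The candidate interfaces and the name coordination_cost below are our own illustration.

```python
# Ranking candidate quotients by how many events they share with the
# divisor's interface: fewer shared events suggest less coordination, and an
# empty intersection means the two parts can run fully independently.

def coordination_cost(quotient_interface, divisor_interface):
    return len(set(quotient_interface) & set(divisor_interface))

e_b = {"a", "b", "c"}            # divisor interface E_B
candidates = {
    "C": {"a", "b", "x"},        # shares two events with B
    "D": {"a", "y", "z"},        # shares one event with B
    "E": {"x", "y", "z"},        # shares nothing: runs independently of B
}

ranked = sorted(candidates, key=lambda k: coordination_cost(candidates[k], e_b))
print(ranked)  # ['E', 'D', 'C']
```

Such a ranking is contextual: it compares quotients only relative to a fixed divisor, and a different divisor may reverse the order.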
The two cases highlighted above give us some insight into how the intensity of coordination can be used as a measure to rank components. Note that such a ranking is contextual to the dividend and the divisor. Although C may require more coordination than D to synchronize with B in order to form component A, in the context of dividing A by another component F , component C may become preferable to D.
The cost of coordination discussed here is orthogonal to efficiency measures discussed in Theorem 1. Thus, the two measures can be combined to first rank components in terms of their interfaces to minimize the amount of coordination required, and then rank components sharing the same interface in terms of the size of their behavior.

Series of division
We defined a division operator for components. We saw that, under some criteria, division may return a 'better' description of a composite system, i.e., for A = B × C, the division of A by B may return a D better than C while preserving the behavior of A.
The question of convergence, then, naturally follows. Consider the expression A = B × C, and the division of A by B returning a component C ′ ≠ C. Symmetrically, the division of A by the new component C ′ may return a component B ′ ≠ B. Repeating the same process, dividing A by B ′ and so on, produces sequences of components C (n) and B (n) at the n-th division. It is interesting, and practically useful, to investigate whether the sequences C (n) and B (n) eventually converge to a fixed pair of components and, if so, under what conditions.

³ A smaller number of shared events does not necessarily mean smaller or equally intense coordination: it is the frequency of the occurrence of shared events that determines the overhead of required coordination. More generally, it is the frequency of occurrences of events (shared or otherwise) related by composability relations that impacts the coordination overhead. The number of shared events may still be used as a crude approximation, or some more sophisticated, e.g., stochastic, model may be used instead as a measure of the overhead of coordination.
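The alternating division can be read as a fixed-point iteration, sketched below under our own drastic simplifications: composition is plain union of timed events, a behavior is a finite set of frozensets of (time, event) pairs, and divide keeps the candidate traces that combine with some divisor trace into a trace of A. Since each step can only shrink an operand and behaviors are finite, the iteration terminates.

```python
# Fixed-point reading of the alternating division B, C -> B', C' -> ...
# (our toy model: compose = union of timed events, behaviors are finite).

def compose(x, y):
    return x | y

def divide(A, divisor, candidates):
    """Keep candidates that compose with some divisor trace into A."""
    return {c for c in candidates if any(compose(c, d) in A for d in divisor)}

B0 = {frozenset({(0, "b1")}), frozenset({(0, "b_dead")})}
C0 = {frozenset({(1, "c1")}), frozenset({(1, "c_dead")})}
A = {frozenset({(0, "b1"), (1, "c1")})}   # only one composable pair

B, C = B0, C0
while True:
    C2 = divide(A, B, C)       # divide A by the current B
    B2 = divide(A, C2, B)      # then divide A by the new C
    if (B2, C2) == (B, C):     # fixed pair reached
        break
    B, C = B2, C2

print(len(B), len(C))  # both shrink to the single composable trace
```

Here the dead traces disappear in the first round and the pair stabilizes immediately; the open question in the text is under which conditions such convergence holds for the actual division operator.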

Related work
Component-based design. In [9] the authors present a calculus of time sensitive components that implement a set of sequences of time sensitive tasks. The model focuses more particularly on the timing profile of a task, and captures, in a component interface, the arrival time and latency for each sequence. Our work can benefit from [9] by using their formalism to specify and implement components. Alternatively, our family of interaction products on components may give new tools for compositional specification of real-time scheduling.
In [6], the authors present an axiomatic specification of components as pairs of an interface signature and a set of sentences capturing some pre and post conditions of stateful components. The semantics of such an interface is the set of models that satisfy all axioms. The authors present an implementation of such interfaces as component bodies, and also provide a composition operation that composes required and provided interfaces of two components. The difference with our work is mainly on the nature of the algebra: we model interaction algebraically, while the authors model the interface, specification, and the body of a component algebraically. Both works can benefit from each other, as our work can be extended using an algebraic specification of components; and our interaction product can extend the algebra of components in [6].
(De)composition. In [7], the authors present a declarative and an operational theory of components, for which they define a refinement relation and monotonicity results for some composition operators. Our work is related, as it aims for similar results, but for the case of cyber-physical systems. Thus, instead of having input and output actions, our components have timed observations and composability relations. We also present a quotient operation on components, and show how it can be used to synthesize coordinating CPSs.
In [15], the authors consider the problem of decomposition of constraint automata. Our work provides a semantic foundation to prove that the construction in [15] is a valid division.
Algebra, co-algebra. The algebra of components described in this paper is an extension of [12]. The algebra of communicating processes [8] (ACP) achieves similar objectives by decoupling processes from their interaction. For instance, the encapsulation operator in process algebra is a unary operator that restricts which actions occur, i.e., δ H (t ∥ s) prevents t and s from performing actions in H. Moreover, composition of actions is expressed using communication functions, i.e., γ(a, b) = c means that actions a and b, if performed together, form the new action c. Different types of coordination over communicating processes are studied in [5]. In [3], the authors present an extension of ACP that includes time-sensitive processes.
The modeling of a component's interaction using co-algebraic primitives is at the foundation of the Reo language [2]. In [4], the question of separation of components into two sub-components is addressed from a co-algebraic perspective.
Discrete Event Systems. Our work represents both cyber and physical aspects of systems with a unified model of discrete event systems. In [11], the author lists the current challenges in modelling cyber-physical systems in this way. The author points to the problem of modular control, where even though two modules run without problems in isolation, the same two modules may block when they are used in conjunction. In [16], the authors present procedures to synthesize supervisors that control a set of interacting processes and, in the case of failure, report a diagnosis. An application for large scale controller synthesis is given in [13].
Coordination. In [14], the author describes infinite behaviors of processes and their synchronization. Notably, the problem of non-blockingness is considered: if two processes eventually interact on some actions, how can we make sure that they will not block each other?

Conclusion
We approach the challenge of designing cyber-physical systems using algebraic methods. Components denote sequences of observations over time, and operations on components capture the interaction that arises from the behaviors of those components. We extend a family of algebraic products with a corresponding family of division operators.
We show how division can serve as a decomposition operator. Intuitively, a component is divisible by another component if there exists a third component that, in composition with the latter, gives the former. Division also opens new possibilities for reasoning, such as updating a system by replacing a component with a 'better' component while preserving the overall behavior. We apply our framework to reason about updates in a cyber-physical system consisting of robots moving on a shared field.
We show that ∏C(E) is an upper bound for the set of coordinators C(E). For any C ∈ C(E), we have C × (R,⊕) ∏C(E) = ∏C(E), which implies that C ≤ ∏C(E) and makes ∏C(E) an upper bound for C(E). Given associativity, commutativity, and idempotency of × (R,⊕) , for any C 1 , C 2 ∈ C(E): B × (R,⊕) (C 1 × (R,⊕) C 2 ) = (B × (R,⊕) C 1 ) × (R,⊕) (B × (R,⊕) C 2 ) ⊑ A, which, applied over the set C(E), gives B × (R,⊕) (∏C(E)) ⊑ A. Thus, ∏C(E) ∈ C(E).