Reversible time travel with freedom of choice

General relativity allows for the existence of closed time-like curves, along which a material object could travel back in time and interact with its past self. This possibility raises the question of whether certain initial conditions, or more generally local operations, lead to inconsistencies and should thus be forbidden. Here we consider the most general deterministic dynamics connecting classical degrees of freedom defined on a set of bounded space-time regions, requiring that it is compatible with arbitrary operations performed in the local regions. We find that any such dynamics can be realised through reversible interactions. We further find that consistency with local operations is compatible with non-trivial time travel: three parties can interact in such a way that each is both in the future and in the past of the others, while all remain free to perform arbitrary local operations.


Introduction
One of the most baffling aspects of general relativity is that certain solutions to the Einstein equations contain closed time-like curves (CTCs) [1][2][3][4][5][6][7], where an event can be both in its own future and past. Although it is not known whether CTCs are actually possible in our universe [8][9][10][11][12], their mere logical possibility poses the challenge of understanding what type of dynamics could be expected in their presence.
The first systematic studies of the subject concentrated on space-time geometries where CTCs appear only in the future of some space-like surface [13][14][15] (figure 1(a)). This makes it possible to set initial conditions on that surface, i.e. in the pre-CTCs era, and to look for the corresponding solutions to the equations of motion. A prime case study is that of a billiard ball thrown in the direction of a wormhole: the initial position and velocity are chosen such that, if undisturbed, the ball comes out of the second mouth of the wormhole in the past and kicks its younger self off course, so that the ball never reaches the wormhole and never delivers the kick. Since classical physics is clearly at odds with such 'inconsistent' dynamics, the corresponding initial conditions would have to be 'forbidden'. This, however, is itself at odds with the local nature of ordinary physical laws: What local mechanism prevents an experimenter from throwing the ball along the 'impossible' trajectory?
The surprising result is that (possibly multiple) self-consistent solutions were found for all cases studied. The ball does not enter the wormhole undisturbed: it is kicked softly, comes out of the wormhole at a slightly different angle than expected and gives its younger self just the right soft kick. Even including friction, exploding bombs, and the like, solutions were found for every initial condition considered [17,18].
The existence of consistent solutions for every initial condition suggests a type of 'no new physics' principle [13]: the presence of CTCs should not modify the local laws of physics, nor the range of possible initial states. It is then meaningful to ask whether the validity of the principle can be extended to the region where CTCs are already present (figure 1(b)). In such a case, there are no sufficiently regular space-like surfaces to set 'global' initial conditions. Furthermore, there exist classes of space-times, aimed at simulating time machines, where geodesics are 'reflected' by the CTC region, making it problematic to impose global initial conditions even in its past [19]. However, the local nature of physical laws would imply that global features, such as the presence of CTCs, should not constrain the possible actions of an agent in any sufficiently localised region which itself does not contain CTCs.
Here we explore whether CTCs are compatible with such an extended requirement of 'no new physics'. Rather than considering a specific type of system (billiard balls, fields, etc), we work within an abstract framework describing classical, deterministic dynamics that does not assume any particular causal structure. It is formally a deterministic version of the formalism of 'classical correlations without causal order' [20], which in turn is the classical limit of the quantum 'process matrix' formalism [21] (see also [22][23][24][25][26][27][28][29][30][31][32]). This latter formalism [21] can be used to study correlations among parties where the causal order among them is not specified a priori. The (often implicit) assumption of a causal order, instead, is replaced by the assumption that the probabilities of measurement outcomes are well defined. It has been shown that this formalism is more general than quantum theory: correlations arise that cannot be explained with a predefined causal order of the parties. One can also consider the underlying state space to be classical (random variables) as opposed to quantum [20]. Strikingly, such 'non-causal' correlations arise in that special case as well. Here, and as mentioned above, instead of dealing with quantum states or random variables, we restrict the model to deterministic dynamics; a limitation that is known to still allow for 'non-causal' correlations [20].
We prove that all classical, deterministic processes compatible with the free choice of local operations represent the evolution of classical systems via reversible dynamics in a suitable topology. Surprisingly, this is not true in the quantum case, where certain processes are incompatible with reversible dynamics [31]; nor is it true if the underlying states are random variables [33]. Yet, for deterministic dynamics, every process can be turned into a reversible one, as is shown later.
We further provide a complete characterisation of up to tripartite processes, including continuous-variable systems, identifying a broad class of processes that can only be realised in the presence of CTCs. Our results show that CTCs in general are logically compatible with classical, deterministic dynamics, where agents are free to perform any classical operation in local regions and, outside such regions, systems evolve according to classical, reversible dynamics.
Our approach departs from previous models of CTCs directly based on the quantum formalism [34][35][36][37][38][39][40][41][42][43][44][45][46]. In those models, if the underlying systems that travel on the CTCs are classical, then the 'no new physics' principle and the assumption of free choice cannot be upheld simultaneously. Rather, it was believed that one needs to invoke quantum mechanics in order to restore these desired features. In this article, we finally discuss the extension of the presented formalism to quantum CTCs and consider it in the light of the above-mentioned models. A detailed comparison among these models will be provided in a forthcoming paper [47].

Approach
It is customary to understand physical laws as rules for predicting future events based on initial conditions. This is often seen as an essential aspect for extracting testable predictions: one can always conceive, at least in principle, a controlled experiment where the initial conditions are set independently of any other relevant aspect of the experiment, and the final state is measured. CTCs undermine this view: in a general scenario, where events can be in their own future and global space-like surfaces are not available, it is unclear what independent variables an experimenter could try to control. Solutions to the dynamics have to be determined 'all at once', without a clear place for external interventions.

Figure 1 [8,9,16]. (a) Events with equal proper times along the world lines of the two mouths of the wormhole are identified. Accelerating the right mouth produces time dilation, resulting in CTCs in the future of the surface S. An experimenter acting in the past of S should be able to prepare arbitrary initial states on a space-like surface P. (b) An experimenter in a localised region L, which does not contain but is traversed by CTCs, should be able to perform arbitrary local operations.
Our approach to a predictive framework for not globally hyperbolic space-times, possibly containing CTCs, is to move to a more general notion of intervention, partly inspired by the methodology of classical causal models [49,50]. The key assumption is that physical laws retain their modular character, namely it is possible to separate the physical properties relevant for the dynamics of the system of interest from those responsible for the 'intervention'. For example, when studying the trajectory of a billiard ball, one can typically ignore the particular mechanism setting the ball in motion; or when estimating the radiation produced by moving charges, one can abstract away the force causing the charge's movement. In other words, we wish to retain a notion of 'freedom of choice' in manipulating the relevant variables.
To be more precise, we will identify interventions with localised operations in CTC-free regions of space-time. Following the above assumption, we assume that it is possible, at least in principle, to engineer any local operation on the system of interest without significantly affecting the dynamics outside the intervention regions. The role of the dynamics is then to predict how a system will respond to certain operations. In other words, given the specification of the operations in a set of space-time regions where intervention takes place, we should be able to calculate the state of the system observed in any other space-time region. We will call 'process' the function providing such a prediction.
Note that a 'process', in general, includes both information about the dynamics as well as additional boundary conditions not set by intervention. To clarify this point, consider an ordinary, CTC-free, scenario where an intervention sets the value of some variable (e.g. a field) on a small portion of a space-like surface and we want to estimate the resulting field somewhere in its causal future. In general, this requires fixing additional boundary conditions, for example on an extension of the initial region to a Cauchy surface. The 'process' is the function mapping the field values in the small region to the observed final field, once the remaining boundary conditions are fixed. In a limiting case, where no intervention is made, the 'process' is simply a specification of an admissible boundary condition in the region where an observation is made.
According to the above scheme, a particular dynamical law generates a class of processes, describing how interventions in arbitrary regions can influence observations in other regions, as well as all possible boundary conditions consistent with the laws. In the case of not globally hyperbolic space-times, boundary conditions might be subject to non-trivial constraints. Our working assumption is that such constraints should not affect the type of operations available in small regions that do not contain CTCs. This might at first seem at odds with 'grandfather paradox'-type arguments: denote by x the value a physical property takes on the future boundary of our region, by a its value on the past boundary, and let the 'CTC dynamics' be the identity, a = x. This is incompatible with any local operation x = f(a) whose function f has no fixed point, for example a flip of a two-valued property.
The problem with the argument above is that it simply assumes that the identity backwards in time is a possible solution of the dynamics, based on the intuition that such evolution would be possible if a were in the future of x, without CTCs. The studies mentioned in the introduction suggest that such an assumption is typically incorrect: the system's evolution typically finds a way to 'adjust itself', preserving the consistency of 'free interventions'. The upshot is that we should not expect all conceivable functions to appear as possible solutions of the dynamics. Our approach is to take consistency with local operations as a starting point and explore the consequences: possible dynamics and their respective predictions are calculated based on the 'freedom of choice' of local operations. Whether this assumption is valid would depend on the particular geometry and dynamical equations. The main result is that it is logically possible, at least in principle, to have a predictively sound theory, compatible with local interventions, where the presence of CTCs can be made manifest through appropriate experiments.

The formalism
The core assumption of our model is that any classical operation that is possible in an ordinary space-time should also be possible in the presence of CTCs, as long as the operation takes place in a localised region of space-time that does not contain CTCs. In this sense, localised regions are ignorant of the CTCs outside them. Let us elaborate on this core assumption.
We consider N non-overlapping space-time regions (henceforth local regions) which, individually, cannot be distinguished from regions in ordinary, globally hyperbolic space-time. These are the regions in which 'interventions' can be made; i.e. evolution inside the regions is assumed to be arbitrary, while outside the regions it is fixed by some given dynamics and boundary conditions. We impose no restriction on the space-time in which the regions are embedded, except that it is a Lorentzian manifold fixed independently of any dynamical degree of freedom of interest. Details regarding the causal structure of space-time will not play a prominent role in our analysis, but we point to [51] for a recent review.
To simplify the analysis, we restrict to compact, simply connected regions that have only space-like boundaries, which we decompose into a past boundary and a future boundary (these local regions are therefore space-time local), see figure 2(a). Furthermore, and for the same purpose, we assume that, for each local region, any time-like curve that enters through the past (future) boundary exits through the future (past) boundary, and that the region contains no CTCs. These assumptions ensure that we can treat the local regions as 'closed laboratories' [21], where a system can only enter once through the past boundary and exit once through the future boundary, without exchanging information with the exterior in between. Under such conditions, the local operations can be simply identified with transformations from an input to an output space. The assumption of space-like boundaries has the further role of ensuring that, in a globally hyperbolic space-time, no region intersects both the future and the past light cone of another region. In other words, causal order among regions would form a partial-order relation. Therefore, any departure from partial order reveals a not globally hyperbolic space-time and some non-trivial form of time travel.
As we are interested in classical systems, we can assign classical state spaces I R (input) and O R (output) respectively to the past and future boundaries of a local region R. States will be denoted as i R ∈ I R , o R ∈ O R . For example, in a field theory, a state would be a function on the corresponding boundary surface and the state spaces would be appropriate spaces of functions.
A deterministic local operation in the local region is described by a function f_R from input to output space (figure 2(a)). We denote by D_R := { f_R : I_R → O_R } the set of all possible operations in region R. We drop the index to refer to collections of objects for all regions. Local operations are not required to be reversible, i.e. the local functions f_R need not be invertible. This corresponds to the assumption that the local experimenters and devices have the ability to erase information by accessing some reservoir, not included in the description of the physical degrees of freedom of interest. Furthermore, input and output state spaces need not be isomorphic, as degrees of freedom may be added or removed during the operation. We will also consider the special case in which either input or output state space is the empty set. An 'output only' region, called a source, can be identified with a space-like region on which an agent (acting somewhere in its past) can prepare an arbitrary state (figure 2(b)), while an 'input only' region, called a sink, can be identified with a space-like region where an agent (somewhere in its future) can only observe the state (figure 2(c)). Ordinary dynamics is concerned with the evolution from a source (state preparation on a space-like surface) to a sink (state observation on a space-like surface).
We want to define a generalised type of dynamics for an arbitrary number of regions, in which arbitrary classical operations can be performed, possibly embedded in a space-time with CTCs. The basic requirement of such a model is that it must be able to predict the state observed on the past boundary of each local region, which in general can depend on all local operations. (In a CTC-free space-time, the input state on a space-like region would only depend on operations in its past light-cone.) For a deterministic model, such a dependence is encoded in a function ω ≡ (ω_1, …, ω_N) : D → I that determines the state on the past boundary of each region as a function of all local operations.
We define consistency with arbitrary choices of local operations in the following way: given a set of operations f ∈ D, let i_R = ω_R(f) be the input state of region R. This is transformed into the output o_R = f_R(i_R) by the local operation in that region. However, there are several different functions that yield the same output o_R given the input i_R. Since the experimenter is free to choose any operation, and it should not matter when that choice is made (in particular, it can be made after she already knows the input i_R), the external dynamics should not distinguish between all such operations. In particular, let us define the constant operation C^o(i) = o, yielding the output o for any input i. Consistency with arbitrary local operations is then formulated as

ω(f) = ω(C^{f(ω(f))}) for all f ∈ D. (1)

In words, the input states i = ω(f) produced by the process given local operations f should be the same as the states produced when all parties perform the constant operations that yield the same output states as the original operations f applied to ω(f). We can simplify the representation of a process ω by noticing that it defines a unique function w : O → I, w(o) := ω(C^o), which we will call the process function. This provides an alternative view for defining a process as a function that maps the states on the future boundaries of the local regions to states on the past boundaries. This is indeed what we expect for local dynamical equations relating states at different points in space-time.
By writing ω(f) = i, we see that condition (1) implies the following condition for w:

for every f ∈ D there exists i ∈ I such that w ∘ f(i) = i. (2)

In other words, if w is a process function, then w ∘ f has a fixed point for every local operation f. Condition (2) is in fact both necessary and sufficient: a function w satisfying (2) uniquely defines a process. This is because of the uniqueness of the fixed points:

Theorem 1. If a function w : O → I satisfies condition (2), then, for every f ∈ D, the fixed point of w ∘ f is unique.

We prove this theorem in the appendix. As opposed to the proof of the analogous theorem in the probabilistic version of the formalism [33], our proof also holds for continuous and not only discrete variables.
Because of theorem 1, every function w : O → I that satisfies condition (2) defines a unique function ω : D → I, with ω(f) equal to the unique fixed point of w ∘ f. It is furthermore easy to see that condition (2) implies the consistency condition (1). Therefore, we can identify a process with its process function w. The interpretation is that dynamics in the presence of CTCs is described by a function that maps the states on the future boundaries of all regions to states on the past boundaries of each region. Condition (2) imposes that such a dynamics is compatible with arbitrary operations in each region; theorem 1 further guarantees that specifying the operations performed in each region is sufficient to predict a unique state on each of the past boundaries.
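The fixed-point characterisation above can be made concrete with a small sketch. The bit-valued process function and local operations below are our own illustration (not an example from the text): a brute-force search exhibits the unique fixed point of w ∘ f (theorem 1) and checks consistency with constant operations (condition (1)).

```python
from itertools import product

# Toy bipartite process function on bits (one-way signalling from R to S):
# w maps the outputs (o_R, o_S) to the inputs (i_R, i_S).
def w(o):
    o_R, o_S = o
    return (0, o_R)                     # i_R is fixed, i_S copies o_R

def fixed_points(w, f, states=(0, 1)):
    """All fixed points of w . f, found by exhaustive search."""
    return [i for i in product(states, repeat=2)
            if w(tuple(fk(ik) for fk, ik in zip(f, i))) == i]

f = (lambda i: 1 - i, lambda i: i)      # R flips its input, S applies identity
fps = fixed_points(w, f)
assert len(fps) == 1                    # theorem 1: the fixed point is unique
i = fps[0]

# Condition (1): replacing f by the constant operations with the same
# outputs leaves the input states unchanged.
o = tuple(fk(ik) for fk, ik in zip(f, i))
const = tuple(lambda _i, v=v: v for v in o)
assert fixed_points(w, const) == [i]
print(i)                                # (0, 1)
```

Exhaustive search is of course only viable for finite toy state spaces; for continuous variables the uniqueness is a consequence of the theorem, not of enumeration.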

Reversibility
Reversible classical dynamics is associated with invertible functions, such that the role of 'preparation' and 'measurement' can be swapped. Not all process functions are invertible; for example, the process function for a single 'sink' region (with trivial output) reduces to the specification of a state on that region and it is clearly not invertible. However, such a process function can be extended to a reversible one by introducing a 'source' region (with trivial input), in the past of the sink, so that the state on the sink can now be calculated as a function of the state prepared by the source, and this function can be invertible.
Crucially, we can prove that every process function can be extended to an invertible one, as expressed by the following theorem, proved in the appendix.

Theorem 2. For every process function w : O → I there exist a source state space E, a sink state space S and an invertible process function w′ : O × E → I × S that reduces to w for an appropriate choice of the source state.

This theorem shows that all process functions can be interpreted in terms of reversible dynamics: the source describes a space-like region 'in the past' of all other regions, while the sink is a space-like region 'in the future'. The process determines the state of the sink as well as the states on the past boundaries of all regions as a function of the states on the future boundaries of all regions and of the source. Because it is reversible, the process can be read in the opposite direction: given the states on the sink and on the past boundaries, it allows calculating the state on all future boundaries and of the source. The time-reversed process is then compatible with arbitrary reversed local operations that map local outputs to local inputs. In [47], it is further proven that the presence of a source and a sink is necessary in order to define a reversible process.
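One simple way to realise such an extension, when the input spaces carry an abelian group structure, is to let the source shift the inputs and the sink record the outputs. The sketch below (bits under XOR) is our own illustration of the idea; the construction used in the appendix proof may differ in detail.

```python
# Hedged sketch: extend a process function w : O -> I to the invertible map
# (o, e) |-> (w(o) XOR e, o), where e is prepared by a source and the sink
# receives a copy of the outputs o.
def extend(w):
    def w_ext(o, e):
        i = tuple(wi ^ ei for wi, ei in zip(w(o), e))  # i = w(o) + e
        return i, o                                    # sink records o
    def w_ext_inv(i, s):
        o = s                                          # read off the sink
        e = tuple(ii ^ wi for ii, wi in zip(i, w(o)))  # recover the source
        return o, e
    return w_ext, w_ext_inv

def w(o):                      # one-way signalling process on two bits
    o_R, o_S = o
    return (0, o_R)

w_ext, w_ext_inv = extend(w)
i, s = w_ext((1, 0), (1, 1))
assert w_ext_inv(i, s) == ((1, 0), (1, 1))     # the extension is invertible
assert w_ext((1, 0), (0, 0))[0] == w((1, 0))   # source e = 0 recovers w
print(i, s)                                    # (1, 0) (1, 0)
```

For each fixed source value e, the shifted map o ↦ w(o) XOR e still has a unique fixed point with every local operation, since shifting can be absorbed into the operations; this is what makes the extension a valid process.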

Characterization of process functions
The causal relations encoded by process functions can be better understood in terms of signalling. In general, given a function h : A × B → C, we say there is no signalling from A to C if h(a, b) = h(ã, b) for all a, ã ∈ A and all b ∈ B. We say there is signalling if the opposite is true, i.e. h(a, b) ≠ h(ã, b) for some a, ã, b. The terminology applies to process functions in the obvious way: a region S does not signal to a region R if the Rth component of the process function does not depend on the output of S.
Simple examples of process functions are causally ordered ones, namely those compatible with CTC-free dynamics, for which signalling is only possible in one direction 13. For example, for regions R, S, T, … taken in causal order, a causally ordered process function has components w_R(o) = ī_R (a constant), w_S(o) = w_S(o_R), w_T(o) = w_T(o_R, o_S), and so on, so that the state on each past boundary is independent of future states 14. It is easy to see that condition (2) is satisfied in such cases, i.e. a fixed point exists for every choice of local operations (it is given by i_R = ī_R, i_S = w_S ∘ f_R(ī_R), and so on). The interesting question is whether more general processes are possible, once CTCs are allowed. To answer this question, we will first give a complete characterisation of all process functions for up to three regions. The detailed proofs can be found in the appendix.
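The forward-substitution fixed point for a causally ordered chain can be sketched directly. The three-region bit example below is our own illustration of why condition (2) always holds in the causally ordered case.

```python
# Bits, regions ordered R -> S -> T: each input depends only on the outputs
# of regions in its causal past, so the fixed point is built by forward
# substitution, exactly as described in the text.
i_R_bar = 0                              # constant input of the first region
w_S = lambda o_R: o_R                    # R signals to S
w_T = lambda o_R, o_S: o_R ^ o_S         # R and S signal to T

def fixed_point(f_R, f_S, f_T):
    i_R = i_R_bar                        # i_R = i_R_bar
    i_S = w_S(f_R(i_R))                  # i_S = w_S . f_R(i_R_bar)
    i_T = w_T(f_R(i_R), f_S(i_S))        # and so on down the causal chain
    return (i_R, i_S, i_T)

i = fixed_point(lambda b: 1 - b, lambda b: b, lambda b: 1)
print(i)                                 # (0, 1, 0)
```

No search is needed: causal order turns the fixed-point condition into an ordinary initial-value computation.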
For a single local region, a process function has to be a constant: w(o) = ī for all o. Thus, an observer acting in a localised region cannot send information back to herself; her observations are fully compatible with her region being embedded in a CTC-free space-time. A direct consequence is that, for an arbitrary number of regions, the input of each region R cannot depend on that region's output: w_R(o) = w_R(o_{\R}), where o_{\R} is the set of outputs of all regions except R.
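For a bit-valued region, the single-region claim can be verified exhaustively. The check below is our own illustration: among all four candidate functions w : O → I, only the two constants admit a fixed point of w ∘ f for every local operation f.

```python
from itertools import product

# Brute-force verification for one bit-valued region: w is a process
# function (w . f has a fixed point for every f : I -> O) iff w is constant.
B = (0, 1)
all_f = [dict(zip(B, o)) for o in product(B, repeat=2)]   # every f : I -> O

def is_process(w):
    return all(any(w[f[i]] == i for i in B) for f in all_f)

valid = [w for w in (dict(zip(B, v)) for v in product(B, repeat=2))
         if is_process(w)]
assert all(w[0] == w[1] for w in valid)   # only the two constants survive
print(len(valid))                         # 2
```

The identity w, for instance, fails against the flip operation f(i) = 1 − i, which is precisely the classical grandfather paradox.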
Bipartite process functions are characterized by the following condition: a function w = (w_R, w_S), with components w_R : O_S → I_R and w_S : O_R → I_S, is a process function if and only if at least one of the two components is constant. In other words, deterministic process functions can only allow one-way signaling. Again, two observers in distinct localised regions would not be able to verify the presence of CTCs outside their regions. (Remarkably, this is not true for the quantum version of the framework [21].) Consider now three regions R, S, T. For simplicity, we denote input and output variables as a ∈ A, b ∈ B, c ∈ C and x ∈ X, y ∈ Y, z ∈ Z, respectively. A process function then has three component functions: a = w_R(y, z), b = w_S(x, z), c = w_T(x, y) (where we used the fact that the input of each region cannot depend on its own output, as seen above). We give a simple characterization of process functions as functions where the output variable of one region 'switches' the direction of causal influence between the two other parties.

13 Formally, a set of regions is causally ordered if their causal relations in space-time form a partial order: any region R is either in the causal past of, in the causal future of, or space-like separated from any other region S. A process function is compatible with such a structure if signalling is only possible from a region to its causal future. We call such a process function causally ordered.

14 For the sake of presentation, equations like w_S(o_R, o_S) = w_S(o_R) express the independence of the function value from the omitted argument.

Theorem 3. A function w = (w_R, w_S, w_T), with components a = w_R(y, z), b = w_S(x, z), c = w_T(x, y), is a process function if and only if each of the three 'reduced functions' defined as

w^z(y, x) := (w_R(y, z), w_S(x, z)),
w^x(z, y) := (w_S(x, z), w_T(x, y)),
w^y(x, z) := (w_T(x, y), w_R(y, z))

is a bipartite process function for every z ∈ Z, x ∈ X, y ∈ Y, respectively.
Recall that a bipartite process function is at most one-way signaling. Theorem 3 thus says that w is a tripartite process function if and only if, for every fixed value for the outcome of one of the regions, only one-way signaling is possible between the other two. It is an open question whether a similar condition characterises arbitrary multipartite processes.
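The condition of theorem 3 can be checked by brute force on a small example. The bit-valued process below is our own sketch, modelled on the continuous-variable example of the next section: for each fixed output of one region, only one of the other two inputs varies, yet every region can receive a signal from each of the others.

```python
B = (0, 1)
# Bit-valued sketch: each input is a product of the other two outputs.
w_R = lambda y, z: (1 - y) * z   # a = w_R(y, z)
w_S = lambda z, x: (1 - z) * x   # b = w_S(x, z)
w_T = lambda x, y: (1 - x) * y   # c = w_T(x, y)

def depends(h):
    """Does the one-argument bit function h depend on its argument?"""
    return h(0) != h(1)

# Theorem 3: for each fixed output of one region, the reduced function on
# the other two must be (at most) one-way signalling.
for z in B:
    assert not (depends(lambda y: w_R(y, z)) and depends(lambda x: w_S(z, x)))
for x in B:
    assert not (depends(lambda z: w_S(z, x)) and depends(lambda y: w_T(x, y)))
for y in B:
    assert not (depends(lambda z: w_R(y, z)) and depends(lambda x: w_T(x, y)))

# Yet, for suitable values of the third output, every region receives a
# signal from each of the other two: the process is not causally ordered.
assert depends(lambda y: w_R(y, 1)) and depends(lambda z: w_R(0, z))
print("theorem 3 condition holds; overall signalling is two-way")
```

The same pairwise checks fail for any process in which two regions signal to each other for a fixed value of the third output, in agreement with the bipartite characterisation.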

Examples
Given the above characterisation, it is simple to find process functions that cannot arise in an ordinary, causal space-time: it is sufficient that each party can signal non-trivially to the other two, while satisfying the condition of theorem 3. We present a continuous-variable example, based on a similar process for 'bits', first found by Araújo and Feix and published in [20]. Consider a tripartite scenario as above, where x, y, z, a, b, c ∈ R. We define w : R^3 → R^3 as

a = w_R(y, z) = [1 − Θ(y)] Θ(z),
b = w_S(x, z) = [1 − Θ(z)] Θ(x), (3)
c = w_T(x, y) = [1 − Θ(x)] Θ(y),

where Θ(t) = 1 for t > 0, Θ(t) = 0 for t ≤ 0. In this process, the sign of the output of each region determines the direction of signaling between the other two. For example, for y ≤ 0 we have a = Θ(z) (T can signal to R) but c = 0 (R cannot signal to T), while for y > 0 the opposite direction of signaling holds (and similarly for the other pairs of regions). By theorem 2, we can extend w to a reversible process function w′. To this end, we introduce source and sink spaces, both isomorphic to R^3, with variables e_0, e_1, e_2 and s_0, s_1, s_2, respectively. The extended process function w′ : R^6 → R^6 is given by

a = [1 − Θ(y)] Θ(z) + e_0,  s_0 = x,
b = [1 − Θ(z)] Θ(x) + e_1,  s_1 = y, (4)
c = [1 − Θ(x)] Θ(y) + e_2,  s_2 = z.

This process allows three observers in regions R, S, T to perform arbitrary deterministic operations on the system they receive from the respective past boundary, sending the result out of the respective future boundary. The outgoing systems then enter the CTC region and undergo some reversible transformation, interacting with each other and with the output of the source, eventually determining the state in the past of each region and of the sink (figure 3). Crucially, the input state of each region depends non-trivially on the output states of the other two, so each observer can communicate with every other. Thus, we have a situation where three observers can experimentally verify that each is both in the past and in the future of the others, they can perform arbitrary local operations, and no contradiction ever emerges.
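The continuous-variable process, as reconstructed from the description in the text (the sign of each output switches the signalling direction between the other two regions), together with an additive source/sink extension, can be sketched as follows; the exact published equations may differ in detail.

```python
# Step function and the tripartite process: the sign of each output
# selects which of the other two regions can signal.
theta = lambda t: 1.0 if t > 0 else 0.0

def w(x, y, z):
    a = (1 - theta(y)) * theta(z)
    b = (1 - theta(z)) * theta(x)
    c = (1 - theta(x)) * theta(y)
    return a, b, c

def w_ext(x, y, z, e0, e1, e2):
    # Additive extension: the source shifts the inputs, the sink records
    # the outputs (our reconstruction of the reversible process).
    a, b, c = w(x, y, z)
    return a + e0, b + e1, c + e2, x, y, z

def w_ext_inv(a, b, c, s0, s1, s2):
    a0, b0, c0 = w(s0, s1, s2)
    return s0, s1, s2, a - a0, b - b0, c - c0

out = w_ext(0.5, -1.0, 2.0, 0.25, 0.5, 0.75)
assert w_ext_inv(*out) == (0.5, -1.0, 2.0, 0.25, 0.5, 0.75)

# 'Grandfather-style' operations: every region flips the sign of what it
# receives; a self-consistent solution nevertheless exists.
flip = lambda i: -1.0 if i > 0 else 1.0
i = (0.0, 0.0, 0.0)
o = tuple(map(flip, i))        # outputs (1, 1, 1)
assert w(*o) == i              # fixed point of w . f
print(o)
```

Even when every observer tries to 'contradict' what she receives, the dynamics settles on a consistent assignment, in line with the discussion of the grandfather paradox above.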

Quantum CTCs
The above framework of classical, reversible dynamics can be extended to quantum systems. It is then interesting to compare the resulting model with existing quantum models for CTCs. We briefly present here the main results, and refer to [47] for a detailed analysis.
A classical system can be 'quantised' by associating to each state a distinct orthogonal state in a Hilbert space, with quantum superpositions represented by linear combinations. Thus, in the quantum version of the formalism, the boundary of each region is associated with a Hilbert space. A classical, reversible process defines a permutation of basis elements and can be extended by linearity to the entire Hilbert space, defining a unitary map from the future to the past boundaries. It is not a priori guaranteed that such a unitary defines a valid quantum process: observers in the local regions should now be able to perform arbitrary quantum operations. The resulting constraints on quantum processes can be conveniently formalised using the process matrix formalism of [21]. Using the characterisation of tripartite quantum processes of [23], it is proven in [47] that the quantisation of a finite-dimensional version of equation (3) indeed defines a valid unitary quantum process.
The two most studied models of quantum systems in the presence of CTCs are the so-called post-selected CTC model (P-CTC) [35-37, 39, 40, 42] and the Deutsch model (D-CTC) [34, 38, 43-45]. Both models assume that CTCs are only present in a limited portion of space-time. At some time before the CTCs, a chronology-respecting (CR) system is prepared. Then, the CR system interacts with a chronology-violating (CV) one, which travels along a CTC. The models prescribe how to calculate the state of the CR system obtained after interaction with the CV one. Within such frameworks, we can model the multi-region scenarios considered here by introducing a CR and a CV system per region, and interpreting the interaction between each pair as our local operation in the corresponding local region. The CV systems then interact according to the unitary process and are later sent back in time, with the backward evolution described according to the specific model. We can then compare the evolution of the CV system predicted by each model.

Figure 3. The outputs of the three local regions fall into a CTC, where they undergo a joint interaction with the state prepared by the source. The CTC outputs the input states to the three local regions and the sink.
As it turns out, the P-CTC model gives the same predictions as ours for any valid unitary process. The crucial difference is that the P-CTC model allows the CV system to evolve according to arbitrary unitaries, generically resulting in a non-linear evolution for the CR system and in a restriction on the local operations that can be performed. By contrast, our model imposes additional constraints, which effectively ensure that the CR system evolves linearly. The D-CTC model, on the other hand, allows arbitrary operations to be performed locally. However, it predicts non-linear evolution of the CR system, even when the CV system evolves according to a process subject to the constraints introduced here [52].

Conclusions
We developed a framework for deterministic dynamics in the presence of CTCs. The framework extends the ordinary concept of time evolution, where a future state is calculated as a function of a past one, to the more general scenario of a number of space-time regions, where the state in the past of each region is calculated as a function of the state in the future of all regions. Requiring that arbitrary operations must be possible in each region imposes strong constraints on the allowed dynamics. Our main result is that it is possible to have reversible dynamics, compatible with arbitrary local operations, where the state observed in each region depends non-trivially on the states prepared in all other regions. Because such a functional relation is reversible, it can always be realised by some physical system subject to local dynamical laws, e.g. in terms of a system of bouncing billiard balls [53].
The main message of our result is that CTCs are not necessarily in conflict with local physics, nor with the 'freedom of choice' associated with the possibility of performing arbitrary operations. The latter furthermore implies that the choice of the operations together with the CTC uniquely determines the states on the past boundaries. Importantly, and contrary to several previous approaches [34-40, 42-46], quantum mechanics need not be invoked to 'solve' the paradoxes of classical time travel. A quantum version of the framework can be developed within the so-called 'process matrix' formalism [21], where it would be natural (in analogy to classical determinism and reversibility) to impose unitarity of the process [31]. The precise connection between classical and quantum frameworks remains, however, an open question (we provided a brief discussion nevertheless); for example, it is unclear whether all classical, deterministic processes can be 'quantised' to give a valid unitary quantum process.

Acknowledgements

the authors and do not necessarily reflect the views of the John Templeton Foundation. We acknowledge the traditional owners of the land on which the University of Queensland is situated, the Turrbal and Jagera people.

Appendix A. Properties of process functions
Here we derive a set of properties of process functions, which will be needed in later proofs. For convenience, we will use the term process function to denote any function w that satisfies condition (2) in the main text, namely that a fixed point of w ∘ f exists for each f ∈ D, even though the equivalence between this condition and the main-text definition of process function, equation (1), is only established by theorem 1, proved in the next section.
Let us first fix some notation. As in the main text, an object without index refers to a collection of objects: I = I_1 × · · · × I_N, etc. We will also use the notation a_{\R} to denote the collection of all components of a except the one associated with region R, so that, for example, o = (o_R, o_{\R}). The first property we need is a necessary condition for process functions:

Lemma A.1. Each component w_R of a process function is constant over O_R, i.e. w_R(o) = w_R(o_{\R}).
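For bit-valued regions, the necessity of this condition is the grandfather paradox in miniature. The following minimal Python illustration is ours (the names are not from the paper): a region whose input depends on its own output can be composed with a 'negation' operation that admits no consistent state.

```python
from itertools import product

BITS = (0, 1)

# A single bit-valued region whose input equals its own output: an identity CTC.
w_loop = lambda o: (o[0],)

# The paradoxical local operation: negate whatever is received.
f_not = lambda i: 1 - i

# States consistent with both the loop and the operation, i.e. i = w(f(i)).
consistent = [i for i in product(BITS, repeat=1)
              if w_loop((f_not(i[0]),)) == i]
print(consistent)   # []: no fixed point, so w_loop is not a process function
```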

Proof. Suppose that the component w_R is not constant over O_R, i.e. that there exists some fixed ō_{\R} ∈ O_{\R} for which the function h_R : O_R → I_R, defined by h_R(o_R) := w_R(o_R, ō_{\R}), is not constant. We can then pick two states a ≠ b in the image of h_R, with preimages o^a_R and o^b_R, and define the function g_R : I_R → O_R by g_R(a) := o^b_R and g_R(i_R) := o^a_R for i_R ≠ a. It is then easy to see that h_R ∘ g_R has no fixed point: h_R ∘ g_R(a) = b ≠ a, while h_R ∘ g_R(i_R) = a ≠ i_R for every i_R ≠ a.

Completing f_R := g_R with the constant operations f_S(i_S) := ō_S for S ≠ R, we obtain a set of local operations f for which the R component of any candidate fixed point would have to satisfy i_R = w_R(g_R(i_R), ō_{\R}) = h_R ∘ g_R(i_R); therefore w ∘ f has no fixed point and w_R is not a component of a process function. We conclude that each component w_R of a process function must be constant over O_R. □

Intuitively, if we fix the operation performed in one of the N regions, we should obtain a process for the remaining N − 1 regions. This intuition will also play an important role in the proofs below. The first step to formalise it is the following definition:

Definition A.2 (reduced function). Given a function w : O → I whose component w_R is constant over O_R, and a local operation f_R ∈ D_R, the reduced function w^{f_R} : O_{\R} → I_{\R} is defined component-wise, for S ≠ R, by

w^{f_R}_S(o_{\R}) := w_S(f_R ∘ w_R(o_{\R}), o_{\R}).    (A.1)

We will need the following fact: for f = (f_R, f_{\R}), i = (i_R, i_{\R}) is a fixed point of w ∘ f if and only if i_{\R} is a fixed point of w^{f_R} ∘ f_{\R} and i_R = w_R ∘ f_{\R}(i_{\R}). We can now prove two crucial properties.

Lemma A.3. (i) If w is a process function, the reduced function w^{f_R} is a process function for every region R and every f_R ∈ D_R. (ii) Conversely, if w_R is constant over O_R and w^{f_R} is a process function for every f_R ∈ D_R, then w is a process function.
Proof. To prove (i), consider arbitrary local operations f_{\R} ∈ D_{\R}. As w is a process function, a fixed point i of w ∘ f exists for f = (f_R, f_{\R}). Its restriction i_{\R} is a fixed point of w^{f_R} ∘ f_{\R}: for S ≠ R, equation (A.1) gives

w^{f_R}_S ∘ f_{\R}(i_{\R}) = w_S(f_R ∘ w_R ∘ f_{\R}(i_{\R}), f_{\R}(i_{\R})) = w_S ∘ f(i) = i_S,

where we used i_R = w_R ∘ f(i) = w_R ∘ f_{\R}(i_{\R}), which follows from the fact that w_R is constant over O_R.

To prove (ii), we can set R = 1 without loss of generality. We then have to prove that, if w^{f_1} is a process function for every f_1, it follows that w is also a process function, i.e. we have to find a fixed point of w ∘ f for an arbitrary f. As the reduced function w^{f_1} is a process function by assumption, there exists a fixed point i_{\1} of w^{f_1} ∘ f_{\1}. Choosing i_1 := w_1 ∘ f_{\1}(i_{\1}) as input state for region 1, we see that i := (i_1, i_{\1}) is a fixed point of w ∘ f. Indeed this is true, by definition of i_1, for the component w_1. For S > 1, the definition of reduced function, equation (A.1), gives

w_S ∘ f(i) = w_S(f_1 ∘ w_1 ∘ f_{\1}(i_{\1}), f_{\1}(i_{\1})) = w^{f_1}_S ∘ f_{\1}(i_{\1}) = i_S,

where in the last equality we used the fact that i_{\1} is a fixed point of w^{f_1} ∘ f_{\1}. □
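Equation (A.1) and the converse direction can be made concrete for finite state spaces. In the Python sketch below (our illustration; the causal-chain process and the function names are not from the paper), reducing a tripartite bit-valued function over region 1 yields, for every f_1, a bipartite function whose process property can be checked by brute force.

```python
from itertools import product

BITS = (0, 1)
OPS = [lambda i, t=t: t[i] for t in product(BITS, repeat=2)]

# Tripartite causal chain (illustrative): w(o) = (0, o_1, o_2).
w = lambda o: (0, o[0], o[1])

def reduced(w, f1):
    """Reduced function of eq. (A.1): fix region 1's operation f1."""
    def w_f1(o23):
        i1 = w((0,) + o23)[0]          # w_1 is constant over its own output O_1
        return w((f1(i1),) + o23)[1:]  # components for regions 2 and 3
    return w_f1

def has_fixed_point(wf, f):
    """Does wf composed with the local operations f admit a consistent state?"""
    return any(wf(tuple(fR(iR) for fR, iR in zip(f, i))) == i
               for i in product(BITS, repeat=len(f)))

# Every reduction is a bipartite process function, for every choice of f1.
ok = all(has_fixed_point(reduced(w, f1), (f2, f3))
         for f1 in OPS for f2 in OPS for f3 in OPS)
print(ok)   # True
```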

Appendix B. Uniqueness of the fixed point
Here we prove theorem 1, namely the following N-dependent proposition P[N]: for every process function w over N regions, i.e. every w satisfying condition (2), and every set of local operations f ∈ D, the fixed point of w ∘ f is unique. We prove this by induction: we first prove P[1] and then the implication P[N − 1] ⇒ P[N] for N > 1.
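For finite state spaces the proposition can be checked directly on examples. A brute-force Python sketch (the causal-chain process is our illustrative choice, not from the text): for a function satisfying condition (2), every set of local operations yields exactly one fixed point.

```python
from itertools import product

BITS = (0, 1)
OPS = [lambda i, t=t: t[i] for t in product(BITS, repeat=2)]

# A three-region causal chain (illustrative): w(o) = (0, o_1, o_2).
w = lambda o: (0, o[0], o[1])

def fixed_points(f):
    """Input states i with i = w(f_1(i_1), f_2(i_2), f_3(i_3))."""
    return [i for i in product(BITS, repeat=3)
            if w(tuple(fR(iR) for fR, iR in zip(f, i))) == i]

# Existence for every f (condition (2)) comes with uniqueness, as theorem 1 asserts.
counts = {len(fixed_points(f)) for f in product(OPS, repeat=3)}
print(counts)   # {1}: exactly one consistent state for each of the 4^3 operation triples
```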

Appendix C. Reversibility
Here we prove that every process function can be extended to an invertible process function (theorem 2).
Proof. Given a process function w : O → I over N local regions, we add two extra regions, a source σ and a sink τ. We take the output space of the source to be isomorphic to the entire input space of the N regions, e ∈ O_σ ≅ I, while the input space of the sink is isomorphic to the output space of the regions, s ∈ I_τ ≅ O. For each R = 1, . . . , N and each e_R ∈ I_R, we introduce a function T^{e_R}_R : I_R → I_R such that there exists an ẽ_R for which T^{ẽ_R}_R(i_R) = i_R and, for each i_R ∈ I_R, the function e_R ↦ T^{e_R}_R(i_R) is invertible. For a state space with a linear structure, we can take T^{e_R}_R(i_R) = i_R + e_R for concreteness, although the proof does not rely on this (if |I_R| = c_R < ∞, we can use T^{e_R}_R(i_R) = i_R ⊕ e_R, where ⊕ is addition modulo c_R). We then define the extended function w′ : O × O_σ → I × I_τ component-wise as

w′_R(o, e) := T^{e_R}_R ∘ w_R(o), R = 1, . . . , N,    w′_τ(o, e) := o.

The function w′ is invertible: given (i, s), the inverse returns o = s and the unique e such that T^{e_R}_R ∘ w_R(s) = i_R for each R (for the linear choice, e_R = i_R − w_R(s)). To show that w′ is a process function, we have to prove that its composition with arbitrary local operations has a fixed point, condition (2) in the main text. Note that this condition is equivalent to the existence of output fixed points: f′ ∘ w′(o′) = o′, with o′ ≡ (o, e). Since local operations for the source σ are functions ∅ → O_σ, where ∅ is the empty set, they can be identified with a state f_σ(∅) ≡ e ∈ O_σ, interpreted as a 'state preparation'. The fixed-point condition for the source component is then trivially satisfied by any e ∈ O_σ. As the sink has no output space, the fixed-point condition reduces to

f_R ∘ T^{e_R}_R ∘ w_R(o) = o_R, R = 1, . . . , N,

which should be satisfied for every f ∈ D and every e ∈ O_σ. This is true because each f_R ∘ T^{e_R}_R is a local operation and, as w is a process function, an output fixed point o ∈ O exists for every set of local operations. □
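The source-and-sink construction can be sketched for bits, with T^{e_R} realised as XOR. The following Python illustration is ours (the bipartite channel example is not from the paper): it checks both the invertibility of the extended map and the persistence of the fixed-point property.

```python
from itertools import product

BITS = (0, 1)
OPS = [lambda i, t=t: t[i] for t in product(BITS, repeat=2)]
N = 2

# A bipartite process over bits (illustrative): one-way channel w = (0, o_1).
w = lambda o: (0, o[0])

def w_ext(o, e):
    """Extended map (o, e) -> (i, s): each region receives w_R(o) shifted by the
    source state e_R (XOR playing the role of T); the sink receives all outputs o."""
    i = tuple(wR ^ eR for wR, eR in zip(w(o), e))
    return i, o

# Invertibility: w_ext is a bijection from O x O_sigma onto I x I_tau.
images = {w_ext(o, e) for o in product(BITS, repeat=N)
          for e in product(BITS, repeat=N)}
print(len(images) == (2 ** N) ** 2)   # True

# Process property: for every source state e and every set of local operations f,
# a consistent output o exists, i.e. f_R(i_R) = o_R where (i, s) = w_ext(o, e).
ok = all(any(tuple(fR(iR) for fR, iR in zip(f, w_ext(o, e)[0])) == o
             for o in product(BITS, repeat=N))
         for e in product(BITS, repeat=N) for f in product(OPS, repeat=N))
print(ok)   # True
```

The inverse can be read off directly: o = s and e_R = i_R XOR w_R(s), matching the general construction with T^{e_R}(i_R) = i_R ⊕ e_R.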
non-null intersection.) In a similar way, we write X = X_S ∪ X_T and Y = Y_R ∪ Y_T, where w^x_S is constant for x ∈ X_S, and so on. Thus we have

w_R(y_R, z) = w_R(y, z_R) = a_0  ∀ y_R ∈ Y_R, y ∈ Y, z_R ∈ Z_R, z ∈ Z;
w_S(x_S, z) = w_S(x, z_S) = b_0  ∀ x_S ∈ X_S, x ∈ X, z_S ∈ Z_S, z ∈ Z;
w_T(x_T, y) = w_T(x, y_T) = c_0  ∀ x_T ∈ X_T, x ∈ X, y_T ∈ Y_T, y ∈ Y.

Now consider an arbitrary local operation f_R : A → X. We want to show that the reduced function w^{f_R}, defined as in equation (A.1), is a bipartite process function. To this end, we need to prove: (i) w^{f_R}_S(y, z) = w^{f_R}_S(z); (ii) w^{f_R}_T(y, z) = w^{f_R}_T(y); (iii) at least one of the two components is constant.
Let us start with point (i). For z_S ∈ Z_S, we have w^{f_R}_S(y, z_S) = b_0, independently of y. Let us then consider z_R ∈ Z_R. By definition, w^{f_R}_S(y, z_R) = w_S(f_R ∘ w_R(y, z_R), z_R). But w_R(y, z_R) = a_0 for z_R ∈ Z_R, thus w^{f_R}_S(y, z_R) = w_S(f_R(a_0), z_R), which is again independent of y. By a similar argument, w^{f_R}_T(y, z) is independent of z. We are thus left with proving point (iii). We shall prove the equivalent implications: w^{f_R}_S not constant ⇒ w^{f_R}_T constant, and w^{f_R}_T not constant ⇒ w^{f_R}_S constant.
Say that w^{f_R}_S is not constant. Then, there is a z_R ∈ Z_R such that w^{f_R}_S(z_R) ≠ b_0. As we have seen above, for z_R ∈ Z_R, w^{f_R}_S(z_R) = w_S(f_R(a_0), z_R), so we need f_R(a_0) ∉ X_S to have w^{f_R}_S(z_R) ≠ b_0. This means that f_R(a_0) ∈ X_T. But then, for y_R ∈ Y_R we have w^{f_R}_T(y_R) = w_T(f_R(a_0), y_R) = c_0, and also w^{f_R}_T(y_T) = c_0 for y_T ∈ Y_T. This means that w^{f_R}_T is constant. To recapitulate, we have proven that, if w^x, w^y, w^z are bipartite process functions for arbitrary x ∈ X, y ∈ Y, z ∈ Z (as per the hypothesis), then w^{f_R} is a bipartite process function for an arbitrary operation f_R. Point (ii) of lemma A.3 finally implies that w is a process function, concluding the proof. □
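The tripartite structure just characterised admits genuinely non-causal instances. As a concrete check (a Python illustration of ours; this bit-valued function is the well-known example attributed in the literature to Baumeler and Wolf, an attribution not taken from this excerpt), every deterministic choice of local operations admits exactly one consistent state, even though each region's input depends non-trivially on the other two regions' outputs.

```python
from itertools import product

BITS = (0, 1)
OPS = [lambda i, t=t: t[i] for t in product(BITS, repeat=2)]

# Bit-valued tripartite process: each input is computed from the other two outputs.
w = lambda o: ((1 - o[1]) & o[2], (1 - o[2]) & o[0], (1 - o[0]) & o[1])

def fixed_points(f):
    """Consistent states i = w(f_1(i_1), f_2(i_2), f_3(i_3))."""
    return [i for i in product(BITS, repeat=3)
            if w(tuple(fR(iR) for fR, iR in zip(f, i))) == i]

# Every one of the 4^3 = 64 deterministic operation triples is consistent,
# and each admits exactly one fixed point.
unique = all(len(fixed_points(f)) == 1 for f in product(OPS, repeat=3))
print(unique)   # True
```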