Journal of Logical and Algebraic Methods in Programming

In this work, we incorporate reversibility into structured communication-based programming, to allow the parties of a session to automatically undo, in a rollback fashion, the effect of previously executed interactions. This makes it possible to take different computation paths along the same session, as well as to revert the whole session and start a new one. Our aim is to define a theoretical basis for examining the interplay in concurrent systems between reversible computation and session-based interaction. We thus propose ReSπ, a session-based variant of the π-calculus that uses memory devices to keep track of the computation history of sessions in order to reverse it. We show how a session type discipline of the π-calculus extends to ReSπ, and illustrate its practical advantages for static verification of safe composition in communication-centric distributed software performing reversible computations. We also show how a fully reversible characterisation of the calculus extends to committable sessions, where computation can go forward and backward until the session is committed by means of a specific irreversible action. © 2015 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).

between causal dependency and reversible structures is studied, with the novelty that in this setting signal identifiers are not necessarily unique.
In our work we mainly take inspiration from the ρπ approach. Indeed, all the other approaches based on CCS and DSD cannot be directly applied to a calculus with name-passing. Moreover, the ρπ approach is preferable to the Rπ one because the former proposes a reduction semantics, which we are interested in, while the latter proposes a labelled semantics, which would complicate our theoretical framework (in order to properly deal with scope extension of names). Specifically, we use unique (non-structured) tags to identify threads, and memories to record the occurrence of actions, choices and forks. Each memory is devoted to storing the information needed to revert a single event, and memories are connected to each other, in order to keep track of the computation history, by using tags as links.
Concerning related works on session-based calculi, it is worth noticing that we consider a setting as standard and simple as possible, namely that of synchronous binary sessions. In particular, our host calculus is the well-established variant of the π-calculus introduced in [23], whose notation has been revised according to [24]. We leave for future investigation the application of our approach to formalisms relying on the other forms of sessions introduced in the literature, among which we mention asynchronous binary sessions [25], multiparty asynchronous sessions [26], multiparty synchronous sessions [27], sessions with higher-order communication [24], and sessions specifically devised for service-oriented computing [28–30].
Finally, the paper with the aim closest to ours is [31], where a formalism combining the notions of reversibility and session is proposed. This calculus is simpler than ReSπ, because it is an extension of the formalism of session behaviours [32] without delegation (i.e., it is a sub-language of CCS) with a checkpoint-based backtracking mechanism. In fact, neither message nor channel passing is considered in the host calculus. Concerning reversibility, only the behaviour prefixed by the most recently traversed checkpoint is recorded by a given party, that is, each behaviour is simply paired with a one-size memory. Moreover, causal consistency is not considered, because in this formalism parties just reduce in a sequential way. Committable sessions are not taken into account either. On the other hand, this formalism enabled the study of an extension of the compliance notion to the reversible setting.

Session-based π-calculus
In this section we present the syntax and semantics of the host language of our reversible calculus. This is a variant of the π-calculus enriched with primitives for managing structured binary sessions.

Syntax
We use the following base sets: shared channels, used to initiate sessions; session channels, consisting of pairs of endpoints used by the two parties to exchange values within an established session; variables, used to store values; labels, used to select and offer branching choices; and process variables, used for recursion. The corresponding notation and terminology are reported in Fig. 1. Constructs k!⟨e⟩.P and k?(x).P realise the standard synchronous message passing, where messages result from the evaluation of expressions. Notably, an exchanged value can be an endpoint that is being used in a session (this channel-passing modality is called delegation), thus allowing complex nested structured communications. Constructs k ◁ l.P and k ▷ {l1 : P1, . . . , ln : Pn} denote label selection and label branching (where l1, . . . , ln are assumed to be pairwise distinct) via the endpoints identified by k and k′, respectively.
They mimic method invocation in object-based programming.
The above interaction primitives are then combined by standard process calculi constructs: conditional choice, parallel composition, restriction, recursion and the empty process (denoting inaction). It is worth noticing that restriction can take both shared and session channels as argument: (νa)P states that a is a private shared channel of P; similarly, (νs)P states that the two endpoints of the session channel, namely s and s̄, are invisible to processes other than P (see the seventh law in Fig. 2), i.e. no external process can perform a session action on either of these endpoints (this ensures non-interference within a session). As a matter of notation, we will write (νc1, . . . , cn)P in place of (νc1) . . . (νcn)P.
We adopt the following conventions about the operators precedence: prefixing, restriction, and recursion bind more tightly than parallel composition.
Bindings are defined as follows: ū(x).P, u(x).P and k?(x).P bind variable x in P; (νa)P binds shared channel a in P; (νs)P binds session channel s in P; finally, μX.P binds process variable X in P. The derived notions of bound and free names, alpha-equivalence ≡α, and substitution are standard. For a process P, fv(P) denotes the set of free variables, fc(P) the set of free shared channels, and fse(P) the set of free session endpoints. For the sake of simplicity, we assume that free and bound variables are always chosen to be different, and that bound variables are pairwise distinct; the same applies to names. Of course, these conditions are not restrictive and can always be fulfilled, possibly by using alpha-conversion.
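To make the binding structure concrete, the sketch below computes the set of free variables over a small hypothetical AST for (a fragment of) the calculus. The constructor names and the encoding are our own, not the paper's; the official syntax is in Fig. 1.

```python
from dataclasses import dataclass

# A hypothetical mini-AST for a fragment of the session-based pi-calculus;
# constructor names are ours.

@dataclass
class Inact:                       # 0 : the empty process
    pass

@dataclass
class Recv:                        # k?(x).P : binds x in P
    endpoint: str
    var: str
    cont: object

@dataclass
class Send:                        # k!<e>.P : here e is a variable or a literal
    endpoint: str
    expr: str
    cont: object

@dataclass
class Par:                         # P | Q
    left: object
    right: object

def fv(p):
    """Free variables, following the binders described in Section 3.1."""
    if isinstance(p, Recv):
        return fv(p.cont) - {p.var}        # x is bound in the continuation
    if isinstance(p, Send):
        # a variable occurring in the sent expression is free
        return ({p.expr} if p.expr.isidentifier() else set()) | fv(p.cont)
    if isinstance(p, Par):
        return fv(p.left) | fv(p.right)
    return set()
```

For instance, fv(k?(x).k!⟨x⟩.0) = ∅, since x is bound by the input prefix, while fv(k!⟨y⟩.0) = {y}.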

Semantics
The operational semantics is given in terms of a structural congruence and a reduction relation. Notably, the semantics is only defined for closed terms, i.e. terms without free variables. Indeed, we regard the binding of a variable as its declaration (and initialisation); therefore free occurrences of variables in a term must be prevented at the outset, since they resemble uses of variables before their declaration in programs (which are considered programming errors).
The structural congruence, written ≡, is defined as the smallest congruence relation on processes that includes the equational laws shown in Fig. 2. These are the standard laws of the π-calculus. Reading the laws in Fig. 2 by row, from left to right and top to bottom, the first three are the monoid laws for | (i.e., it is associative and commutative, and has 0 as identity element). The next four laws deal with restriction, enabling garbage-collection of unused channels, scope extension and scope swap, respectively. The eighth law permits a recursion to be unfolded (notation P[Q/X] denotes the replacement of free occurrences of X in P by process Q). The last law equates alpha-equivalent processes, i.e. processes differing only in the identity of bound variables/channels.
To define the reduction relation, we use an auxiliary function ·↓ for evaluating closed expressions: e ↓ v says that expression e evaluates to value v (where v ↓ v, and x↓ is undefined).
The reduction relation, written →, is the smallest relation on closed processes generated by the rules in Fig. 3. We comment on the salient points. A new session is established when two parallel processes synchronise via a shared channel a; this results in the generation of a fresh (private) session channel whose endpoints are assigned to the two session parties (rule [Con]). A conditional choice reduces to its then-branch or else-branch according to the evaluation of the guard (rules [If1] and [If2]); e.g., if e then P1 else P2 → P2 when e ↓ false.
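The session-initiation step of rule [Con] can be sketched as follows: synchronisation on a shared channel creates a fresh private session channel s and hands its two endpoints (written s and ~s here) to the parties, substituting them for the bound variables. Continuations are plain strings and substitution is textual, which is only adequate when variable names do not occur as substrings of other identifiers; the whole encoding is ours, for illustration only.

```python
import itertools

_fresh = itertools.count(1)  # supply of fresh session-channel names

def init_session(req_var, req_cont, acc_var, acc_cont):
    """Minimal sketch of rule [Con]: generate a fresh session channel s
    and substitute its endpoints for the variables bound by the two
    session-initiation prefixes, yielding the continuations P1[s/x]
    and P2[~s/y]."""
    s = f"s{next(_fresh)}"
    p1 = req_cont.replace(req_var, s)        # P1[s/x]
    p2 = acc_cont.replace(acc_var, "~" + s)  # P2[~s/y]
    return s, p1, p2
```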

The multiple providers scenario in the session-based π -calculus
The scenario involving a client and multiple providers introduced in Section 1 can be rendered in the π-calculus as follows (for the sake of simplicity, here we consider just two providers). The client process P_client is defined as

if accept(y_quote) then x ◁ l_acc.P_acc else (if negotiate(y_quote) then x ◁ l_neg.P_neg else x ◁ l_rej.0)

while a provider process P_provider_i is as follows:

a_login(y). y?(z_req). y!⟨quote_i(z_req)⟩. y ▷ {l_acc : Q_acc, l_neg : Q_neg, l_rej : 0}

We show below a possible evolution of the system, where the client contacts provider1 and accepts the proposed quote.

Reversible session-based π -calculus
In this section, we introduce ReSπ, a reversible extension of the calculus described in Section 3. A reversible calculus is typically obtained from the corresponding host calculus by adding memory devices (see Section 2). These aim at storing information about interactions and their effects on the system, which otherwise would be lost during forward computations, such as the discarded branch of a conditional choice. In doing this, we follow the approach of [15], which in turn is inspired by [5] (for the use of memories) and by [12] (for the use of thread identifiers). Of course, since here we consider as host calculus a session-based variant of the standard π-calculus, rather than an asynchronous higher-order variant, the technical development is different.
Roughly, our approach to keeping track of the computation history is as follows: we tag processes with unique identifiers and use memories to store the information needed to reverse each single forward reduction. Thus, the history of a reduction sequence is stored in a number of small memories connected to each other by using tags as links. In this way, ReSπ terms can perform, besides forward reductions, also backward reductions that undo the effect of the former. As in the reversible calculi discussed in Section 2, forward computations are reverted in a causal-consistent fashion. That is, independent (more precisely, concurrent) actions can be undone in an order possibly different from the reverse of the order of the forward reductions. Specifically, an action can be undone only after all the actions causally depending on it have already been undone. We will come back to causal consistency in Section 5.
Before introducing the technicalities of ReSπ, we informally provide a basic intuition about its main features. Let us come back to the scenario introduced in Section 1 and specified in the π-calculus in Section 3.3. We can obtain a ReSπ specification of the scenario by simply annotating the π-calculus term with the (unique) tags t1, t2 and t3. Now, the computation described in Section 3.3 corresponds to a sequence of five forward reductions leading to a ReSπ process M. The forward computation has created a tuple t̃ of fresh tags, which includes the tags attached to the resulting processes of the client and provider1, respectively. Moreover, each reduction has created a memory, spawned in parallel with the processes of the involved parties and devoted to storing the information for reverting the corresponding forward reduction. Here, for the sake of presentation, we omit the content of such memories, except for the one generated by the first reduction: it records that the process tagged by t1 (i.e., the client) initiates a session s along channel a_login with the process tagged by t2 (i.e., the first provider); it also records the variables replaced by the session endpoints and the continuation processes together with their tags. Notably, process M cannot immediately use this memory to revert the interaction corresponding to the session initiation. Indeed, a memory can trigger a backward reduction only if two processes properly tagged with the continuation tags are available, which is not the case for the first memory in the process M. Therefore, as expected, all the other forward reductions must previously be reverted in order to revert the session initiation.

Syntax
The syntax of ReSπ is given in Fig. 4. In addition to the base sets used for π -calculus processes in Section 3, here we use tags, ranged over by t, t , . . . , to uniquely identify threads. Letters h, h , . . . denote names, i.e. (shared and session) channels and tags together. Uniqueness of tags is ensured by using the restriction operator and by only considering reachable processes (see Definition 3).
ReSπ processes are built upon standard (session-based) π-calculus processes by labelling them with tags. Thus, the syntax of π-calculus processes P, as well as of expressions e, is the same as that shown in Fig. 1 and, hence, is omitted here. It is worth noticing that only ReSπ processes can execute (i.e., π-calculus ones cannot). ReSπ also extends the π-calculus with memory processes m. In particular, there are three kinds of memories:
• Action memory ⟨t1 −A→ t2, t′1, t′2⟩, storing an action event A together with the tag t1 of the active party of the action, the tag t2 of the corresponding passive party, and the tags t′1 and t′2 of the two new threads activated by the corresponding reduction. An action event A, as we shall clarify later, records all the information necessary to revert the corresponding interaction, which can be either a session initiation a(x)(y)(νs)⟨P, Q⟩, a communication along an established session k!⟨e⟩(x)⟨P, Q⟩, or a branch selection k ◁ li⟨P, {l1 : P1, . . . , ln : Pn}⟩. Notably, in the latter two events, k can only be either s or s̄ (i.e., it cannot be a variable).
• Choice memory ⟨t, e?P:Q, t′⟩, storing a choice event e?P:Q together with the tag t of the conditional choice and the tag t′ of the newly activated thread. The choice event e?P:Q records the evaluated expression e, and the processes P and Q of the then-branch and else-branch, respectively.
• Fork memory ⟨t ⇒ (t1, t2)⟩, storing the tag t of a splitting thread, i.e. a thread of the form t : (P | Q), together with the tags t1 and t2 of the two newly activated threads, i.e. t1 : P and t2 : Q. The use of fork memories is analogous to that of connectors in [18].
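The three kinds of memories can be rendered as plain records; the sketch below uses field names of our own choosing (the paper writes the memories as ⟨t1 −A→ t2, t′1, t′2⟩, ⟨t, e?P:Q, t′⟩ and ⟨t ⇒ (t1, t2)⟩).

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass(frozen=True)
class ActionMemory:
    active: str              # tag t1 of the active party
    event: Any               # action event A (initiation, communication, selection)
    passive: str             # tag t2 of the passive party
    conts: Tuple[str, str]   # tags t1', t2' of the two newly activated threads

@dataclass(frozen=True)
class ChoiceMemory:
    tag: str                 # tag t of the conditional thread
    cond: str                # evaluated expression e
    then_p: Any              # then-branch P
    else_p: Any              # else-branch Q
    cont: str                # tag t' of the newly activated thread

@dataclass(frozen=True)
class ForkMemory:
    tag: str                 # tag t of the splitting thread t:(P|Q)
    conts: Tuple[str, str]   # tags t1, t2 of the two sub-threads

def cont_tags(m):
    """Continuation tags of a memory: the tags of the threads that must
    be available in parallel for the memory to trigger a backward step."""
    return (m.cont,) if isinstance(m, ChoiceMemory) else m.conts
```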
Threads and memories are composed by the parallel composition and restriction operators. The latter, as well as the notions of bound and free identifiers, extends to names. In particular, for a ReSπ process M, ft(M) denotes the set of free tags; fv(·), fc(·) and fse(·) extend naturally to ReSπ processes. Of course, we still rely on the assumptions on free and bound variables/channels mentioned in Section 3.1.
Not all processes allowed by the syntax in Fig. 4 are semantically meaningful. Indeed, in a general term of the calculus, the history stored in the memories may not be consistent, due to the use of non-unique tags or to broken connections between the continuation tags within memories and the corresponding threads. For example, given the choice memory ⟨t, e?P:Q, t′⟩, we have a broken connection when no thread tagged by t′ exists in the ReSπ process and no memory mentioning t′, i.e. of the form ⟨t′ −A→ t2, t′1, t′2⟩, ⟨t1 −A→ t′, t′1, t′2⟩, ⟨t′, e?P1:P2, t′1⟩, or ⟨t′ ⇒ (t1, t2)⟩, exists.
The class of meaningful ReSπ processes we are interested in consists of programs and runtime processes. The former are the terms that can be written by programmers, i.e. ReSπ processes with no memories; in fact, memories are not expected to occur in source code. We assume that the threads within a program have unique tags. The latter are the ReSπ processes that can be obtained from programs by means of forward reductions; in this way, history consistency is ensured. Using the terminology from [20], the processes of the considered class are called reachable. We formalise their definition below.

Definition 1 (Programs).
The set of ReSπ programs is the set of terms generated by the following grammar M ::= t : P | (νc) M | M | N | nil and whose threads have distinct tags, where P is a π -calculus process as in Fig. 1.

Definition 2 (Runtime processes).
The set of ReSπ runtime processes is the set of terms obtained from ReSπ programs by means of the transitive closure of the forward reduction relation (see Section 4.2).

Definition 3 (Reachable processes).
The set of ReSπ reachable processes is the union of the sets of programs and runtime processes.
Notice that in Definition 1 the restriction operator ranges over channels c rather than over names h. This is because there is no need to restrict tags in a program: it is sufficient to use distinct tags, as required by the definition. In practice, the programmer writes just a π-calculus term, which can then be automatically annotated with unique tags to obtain a ReSπ program. At runtime, as shown in the next subsection, the operational semantics is in charge of generating fresh tags for the new threads by means of the restriction operator.
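The automatic annotation step can be sketched as follows: given the top-level parallel components of a plain π-calculus term, pair each with a distinct tag, yielding a ReSπ program t1 : P1 | . . . | tn : Pn that satisfies the distinct-tag requirement of Definition 1. The list-of-pairs representation is ours.

```python
import itertools

def annotate(components):
    """Pair each top-level parallel component of a pi-calculus term
    with a fresh, distinct tag, producing a ReSpi program."""
    counter = itertools.count(1)
    return [(f"t{next(counter)}", p) for p in components]
```

For instance, annotate(["P_client", "P_provider1", "P_provider2"]) yields the tagged threads t1 : P_client, t2 : P_provider1 and t3 : P_provider2, matching the tags used at the beginning of this section.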

Semantics
The operational semantics of ReSπ is given in terms of a structural congruence and a reduction relation.
The structural congruence ≡ extends that of the π-calculus (Fig. 2) with the additional laws in Fig. 5. Most of the new laws simply deal with the parallel and restriction operators on ReSπ processes; thus, we only focus on the relevant ones (the laws are read by row, from left to right and top to bottom). The eighth law permits a restriction on a π-calculus process to be moved to the level of ReSπ processes. The ninth law lifts the congruence at the π-calculus process level to the thread level. The last law is crucial for fork handling: it is used to split a single thread composed of two parallel processes into two threads with fresh tags; the tag of the original thread and the new tags are properly recorded in a fork memory.
The reduction relation of ReSπ is given as the union of the forward and backward reduction relations defined by the rules in Fig. 6. We comment on the salient points. When two parallel threads synchronise to establish a new session (rule [fwCon]), two fresh tags are created to uniquely identify the two continuations of the synchronising threads. Moreover, all relevant information is stored in the action memory ⟨t1 −a(x)(y)(νs)⟨P1, P2⟩→ t2, t′1, t′2⟩: the tag t1 of the initiator (i.e., the thread executing a prefix of the form ā(·)), the tag t2 of the thread executing the dual action, the tags t′1 and t′2 of their continuations, the shared channel a used for the synchronisation, the replaced variables x and y, the generated session channel s, and the processes P1 and P2 to which the substitutions are applied. All this information is exploited to revert the reduction (rule [bwCon]). In particular, the corresponding backward reduction is triggered by the coexistence of the memory described above with two threads tagged t′1 and t′2, all of them within the scope of the session channel s and of the tags t′1 and t′2 generated by the forward reduction (which, in fact, are removed by the backward one). Notice that, when considering reachable processes, due to tag uniqueness, the two processes P and Q must coincide with P1[s/x] and P2[s̄/y]; indeed, as registered in the memory, these latter processes have been tagged with t′1 and t′2 by the forward reduction. The fact that two threads tagged with t′1 and t′2 are available in parallel with the memory ensures that all actions possibly executed by the two continuations activated by the forward reduction have been undone and, hence, we can safely proceed to undo the forward reduction itself.
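The enabling condition shared by the backward rules can be phrased as a simple check over tags. In the sketch below (our encoding: a memory reduced to its tuple of continuation tags, the surrounding process reduced to the set of tags of the currently available threads), a memory may fire only when every continuation tag it records is carried by some available thread.

```python
def bw_enabled(cont_tags, thread_tags):
    """A memory can trigger a backward reduction only if, for every
    continuation tag it records, a thread carrying exactly that tag is
    available in parallel. By tag uniqueness (on reachable processes),
    this guarantees that every action causally depending on the
    memorised one has already been undone."""
    return all(t in thread_tags for t in cont_tags)
```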
Rules [fwCom] and [bwCom] deal with communication along an established session, while rules [fwLab] and [bwLab] deal with branch selection; they follow the same pattern as [fwCon] and [bwCon]. Notably, in the communication rules, besides tags and continuation processes, the action memory also stores the session endpoint k of the receiving party (the other endpoint k̄ is obtained by duality), the expression e generating the sent value, and the replaced variable x.
It is also worth noticing that, since all the information about a choice event is stored in the corresponding memory, we need just one backward rule ([bwIf]) to revert the effects of the forward rules [fwIf1] and [fwIf2]. The meaning of the remaining rules, dealing with parallel composition, restriction and structurally congruent terms, is straightforward.

The multiple providers scenario in ReSπ
Let us consider again the multiple providers scenario. We have shown at the beginning of this section that a ReSπ specification can be obtained by simply annotating the π-calculus term with unique tags. Now, the computation described in Section 3.3 corresponds to a sequence of forward reductions in which five memories are generated, each one dedicated to reverting the effects of the corresponding forward reduction.
If a problem occurs during the subsequent interactions between the client and the provider for finalising the service agreement, the computation can be reverted to the initial state. In particular, the backward rules [bwCon], [bwCom], [bwLab] and [bwIf] can be applied only if the ReSπ term contains a memory in parallel with thread(s) appropriately tagged by the continuation tag(s) stored in that memory. For example, to apply rule [bwCon] in the process M, two threads tagged by the continuation tags stored in the first memory would have to be in parallel with it, which actually is not the case. In fact, in M, only the last memory can trigger a backward step, by means of the application of rule [bwLab]: the threads labelled by its continuation tags are removed, while the threads performing the selection and offering the branching choice are restored. Then, in the resulting process M′, only the last memory can trigger a backward reduction, which undoes the conditional choice performed by the client thread. Similarly, further backward reductions can be subsequently triggered by the other memories, consuming them from the bottom to the top of the term. In this way, the forward computation can be completely reverted, and the client can start a new session, possibly with provider2.
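The bottom-to-top consumption of memories just described can be simulated with a naive rollback loop: repeatedly pick a memory whose continuation tags are all present among the current threads, remove those threads and the memory, and restore the memorised predecessor threads. Memories are encoded here as (predecessor-threads, continuation-tags) pairs and threads as a dict from tags to processes; both representations are ours.

```python
def rollback(threads, memories):
    """Undo enabled memories until none is enabled, returning the
    resulting threads and any leftover memories. On a reachable
    process this reverts the whole forward computation."""
    memories = list(memories)
    progress = True
    while memories and progress:
        progress = False
        for m in memories:
            preds, conts = m
            if all(t in threads for t in conts):   # backward rule enabled
                for t in conts:
                    del threads[t]                 # consume continuation threads
                threads.update(preds)              # restore predecessor threads
                memories.remove(m)
                progress = True
                break
    return threads, memories
```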
Notice that in ReSπ there is no need to explicitly implement loops enabling the client to undo and restart sessions. Notice also that here we do not consider specific primitives and techniques to avoid interacting again with the same provider: these would break the Loop Lemma (see Lemma 5) and complicate our theoretical framework; we refer to [34] for a definition of such controlled forms of reversibility.

Properties of the reversible calculus
We present in this section some properties of ReSπ, which are typically enjoyed by reversible calculi. We exploit terminology, arguments and proof techniques of previous works on reversible calculi (in particular, [15,5,20]). As a matter of notation, we will use P and R to denote the sets of π-calculus processes and of ReSπ processes, respectively.

Correspondence with π -calculus
We first show that ReSπ is a conservative extension of the (session-based) π-calculus. In fact, like most reversible calculi, ReSπ is only a decoration of its host calculus. Such decoration can be erased by means of the forgetful map φ, which maps ReSπ terms into π-calculus ones by simply removing memories, tag annotations and tag restrictions.

Definition 4 (Forgetful map).
The forgetful map φ : R → P, mapping a ReSπ process M into a π-calculus process P, is inductively defined on the structure of M: it removes tag annotations (φ(t : P) = P), replaces memories with the inactive process, discards tag restrictions, and acts homomorphically on the remaining operators. To prove the correspondence between ReSπ and the π-calculus, we need an auxiliary lemma relating the structural congruence of ReSπ to that of the π-calculus. The correspondence between ReSπ and π-calculus reductions is then completed by two further lemmas which, intuitively, are the inverses of Lemma 1 and Lemma 2. Proof. We proceed by induction on the derivation of P → Q (see Appendix A.1.1). □
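The forgetful map can be sketched over a tuple encoding of ReSπ terms. The encoding is ours: ("thread", t, P), ("mem", ...), ("res", h, M), ("par", M, N) and ("nil",); to keep the sketch short we assume tags are the only names beginning with "t", which is a simplifying convention, not the paper's.

```python
def phi(m):
    """Erase tags, memories and tag restrictions from a ReSpi term,
    yielding its underlying pi-calculus term (Definition 4, sketched)."""
    kind = m[0]
    if kind == "thread":
        return m[2]                       # phi(t : P) = P
    if kind == "par":
        l, r = phi(m[1]), phi(m[2])
        if l == ("inact",):
            return r                      # drop erased memories
        if r == ("inact",):
            return l
        return ("par", l, r)
    if kind == "res":
        h, body = m[1], phi(m[2])
        if h.startswith("t"):             # tag restriction: dropped
            return body
        return ("res", h, body)           # channel restriction: kept
    return ("inact",)                     # memories and nil map to inaction
```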

Loop lemma
The following lemma shows that, in ReSπ , backward reductions are the inverse of the forward ones and vice versa.

Causal consistency
We show here that reversibility in ReSπ is causally consistent. Informally, this means that an action can be reverted only after all the actions causally depending on it have already been reverted. In this way, in the presence of independent actions, backward computations are not required to follow the exact reverse of the forward execution order. We formalise below the notions of independent (i.e., concurrent) actions and of causal consistency.
As in [15] and [5], we consider a labelled variant of the reduction semantics, where each transition is labelled with information about the threads and memories it involves (e.g., fork memories of the form ⟨t ⇒ (t1, t2)⟩). Moreover, since conflicts between transitions are identified by means of tag identifiers (see Definition 5 below), we only consider transitions that do not use α-conversion on tags, and that generate fork memories in a deterministic way, e.g. given a memory ⟨t ⇒ (t1, t2)⟩, the tags t1 and t2 are generated by applying an injective function to t.
The stamp λ(η) of a transition label η identifies the threads involved in the corresponding transition (we use T to denote a set of tags {ti}i∈I). The stamp of fork memories permits us to take into account the relationships between a thread and its sub-threads; this is similar to the closure over tags used in [18]. Notably, as in [15], the tags of continuation processes are inserted into a stamp, in order to take into account possible conflicts between a forward transition and a backward one. Notice also that it is not necessary to include in the stamp the fresh session channel created or used by a reduction. In fact, this would make it possible to detect conflicts between the transitions involving the memory corresponding to the creation of the channel and the transitions where such a channel is used. Such conflicts, however, are already implicitly accounted for, since after its creation the channel is only known by the threads corresponding to the continuation processes, which are already considered in the stamp as discussed above.
We can now define when two transitions are concurrent.

Definition 5 (Concurrent transitions). Two coinitial transitions are in conflict if their stamps share at least one tag. Two coinitial transitions are concurrent if they are not in conflict.
Intuitively, two transitions are concurrent when they do not involve a common thread.
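Rendering stamps as plain sets of tags, the concurrency check of Definition 5 is disjointness, after closing each stamp over the fork memories so that a thread also accounts for its sub-threads (the fork-closure remark above). The fork records are encoded here as a dict from a tag to its pair of sub-thread tags; the encoding is ours.

```python
def closure(tags, forks):
    """Close a stamp over fork memories: every tag in the stamp also
    brings in the tags of its (transitive) sub-threads."""
    result = set(tags)
    work = list(tags)
    while work:
        t = work.pop()
        for child in forks.get(t, ()):
            if child not in result:
                result.add(child)
                work.append(child)
    return result

def concurrent(stamp1, stamp2, forks=None):
    """Two coinitial transitions are concurrent iff their (fork-closed)
    stamps are disjoint, i.e. they involve no common thread."""
    forks = forks or {}
    return not (closure(stamp1, forks) & closure(stamp2, forks))
```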
The following lemma characterises the causal independence of concurrent transitions. In order to study the causality of ReSπ reversibility, we introduce the notion of causal equivalence [4,5] between traces. This is defined as the least equivalence relation between traces, closed under composition, that obeys the following rules: intuitively, the first rule states that the execution order of two concurrent transitions can be swapped, while the other rules state that the composition of a trace with its inverse is equivalent to the empty transition. We conclude with the result stating that two coinitial causally equivalent traces lead to the same final state. Thus, in such a case, we can roll back to the initial state by reversing either of the two traces.

Type discipline
In this section, we first recall the session type discipline of the session-based π-calculus, and then discuss how to exploit it to type ReSπ processes.

Typing session-based π -calculus
The type discipline presented here is basically the one proposed in [23], except for the notation of the calculus, which has been revised according to [24].

Types
The syntax of sorts, ranged over by S, S′, . . . , and types, ranged over by α, β, . . . , is defined in Fig. 7. Type &[l1 : α1, . . . , ln : αn] describes a branching behaviour: it waits with n options, and behaves as type αi if the i-th action is selected (external choice). Type ⊕[l1 : α1, . . . , ln : αn] represents the behaviour which selects one of the li and then behaves as αi, according to the selected li (internal choice). Type end represents inaction, acting as the unit of sequential composition. Type μt.α denotes a recursive behaviour, which starts by doing α and, when the variable t is encountered, recurs to α again. As in [23], we take an equi-recursive view of types, not distinguishing between a type μt.α and its unfolding α[μt.α/t], and we are interested in contractive types only, i.e. in each subexpression μt.μt1 . . . μtn.α, the body α is not a type variable. The result is that, in a typing derivation, the types μt.α and α[μt.α/t] can be used interchangeably.
For each type α, we define its dual type ᾱ by exchanging ! with ? and & with ⊕. The inductive definition is given in Fig. 8.
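The duality operation of Fig. 8 can be sketched as a recursive function over a tuple encoding of types (our own encoding: ("!", S, α) and ("?", S, α) for output and input, ("&", branches) and ("+", branches) for branching and selection, ("end",), ("var", t) and ("mu", t, α)): swap ! with ? and & with ⊕, dualising continuations, with end and type variables self-dual.

```python
def dual(t):
    """Dual of a session type, in the style of Fig. 8."""
    tag = t[0]
    if tag in ("end", "var"):
        return t                                       # self-dual
    if tag == "!":
        return ("?", t[1], dual(t[2]))                 # output <-> input
    if tag == "?":
        return ("!", t[1], dual(t[2]))
    if tag == "&":
        return ("+", {l: dual(a) for l, a in t[1].items()})   # branching <-> selection
    if tag == "+":
        return ("&", {l: dual(a) for l, a in t[1].items()})
    if tag == "mu":
        return ("mu", t[1], dual(t[2]))                # dualise the body
    raise ValueError(f"unknown type constructor: {tag}")
```

As expected, duality is an involution: dual(dual(α)) = α.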

Typing system
A sorting (resp. a typing, resp. a basis) is a finite partial map from shared identifiers to sorts (resp. from session identifiers to types, resp. from process variables to typings). We let Γ, Γ′, . . . (resp. Δ, Δ′, . . . , resp. Θ, Θ′, . . . ) range over sortings (resp. typings, resp. bases). We write Δ · k : α when k ∉ dom(Δ); this notation is then extended to Δ · Δ′. Typing judgements are of the form Γ; Θ ⊢ P ▷ Δ, which stands for "under the environment Γ; Θ, process P has typing Δ". The typing system is defined by the axioms and rules in Fig. 9. We call a typing Δ completed when it contains only end types. A typing Δ is called balanced if whenever s : α, s̄ : β ∈ Δ, then α = β̄. We refer the interested reader to [23] for detailed comments on the rules.

Results
We report here the main results concerning the type discipline, namely Subject Reduction and Type Safety, borrowed from [23]. The former states that well-typedness is preserved along computations, while the latter states that no interaction errors occur in well-typed processes. The notion of error, necessary to formalise Type Safety, is also taken from [23]. A k-process is a process prefixed by subject k, while a k-redex is the parallel composition of two k-processes, either of the form (k!⟨e⟩.P1 | k?(x).P2) or (k ◁ li.P | k ▷ {l1 : P1, . . . , ln : Pn}). Then, P is an error if P ≡ (νc̃)(Q | R) where Q is, for some k, the parallel composition of either two k-processes that do not form a k-redex, or of three or more k-processes.
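The error notion can be sketched as a check over the k-processes competing for a session. In the encoding below (ours), prefixed processes are tuples ("send", k), ("recv", k), ("sel", k, l) and ("bra", k, labels), and ~k denotes the endpoint dual to k.

```python
def _dual_ep(k):
    """Dual endpoint: s <-> ~s."""
    return k[1:] if k.startswith("~") else "~" + k

def is_redex(p, q):
    """Two k-processes form a k-redex when they are dual prefixes on the
    two endpoints of the same session: send/receive, or a selection
    whose label is offered by the branching."""
    if p[0] == "send" and q[0] == "recv":
        return q[1] == _dual_ep(p[1])
    if p[0] == "sel" and q[0] == "bra":
        return q[1] == _dual_ep(p[1]) and p[2] in q[2]
    return False

def is_error(kprocs):
    """A parallel composition of k-processes (for the same session k) is
    an error when there are three or more of them, or exactly two that
    do not form a k-redex in either order."""
    if len(kprocs) >= 3:
        return True
    if len(kprocs) == 2:
        p, q = kprocs
        return not (is_redex(p, q) or is_redex(q, p))
    return False
```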

Theorem 3 (Type Safety). A program typable under a balanced channel environment never reduces to an error.
Proof. See the proof of Theorem 3.4 in [23]. □

Typing ReSπ
We show here how the notion of types, the typing system and the related results given for the π-calculus (Section 6.1) can be reused for typing ReSπ. The key point is that we only consider reachable ReSπ processes originated from ReSπ programs that are well-typed (according to the typing discipline of the π-calculus). In fact, by statically type checking ReSπ programs, we already check all possible interactions that they will perform. More specifically, Subject Reduction and Type Safety ensure that all runtime processes obtained from a program by means of (forward) reductions are interaction safe. Thus, since backward computations cannot lead to new runtime processes, but just go back to terms reachable from the program via forward reductions, there is no need to type check the content of the memories in runtime processes.

Typing the multiple providers scenario
Coming back to the scenario introduced in Section 3.3 and specified in ReSπ in Section 4.3, we can easily verify that the process is well-typed (assuming that the unspecified processes P_acc, P_neg, Q_acc and Q_neg are properly typable). In particular, the channel a_login can be typed by a shared channel type built using the sorts Request and Quote to type requests and quotes, respectively.
Let us now consider a scenario where the client wishes to concurrently submit two different requests to the same provider, which would be able to serve them concurrently. Consider in particular the following specification of the client (the provider's is dual): The new specification of the scenario is clearly not well-typed, due to the use of parallel threads within the same session. This restriction prevents messages related to different requests from being mixed up and wrongly delivered. To submit separate requests concurrently, the client must instantiate separate sessions with the provider, one for each request.
The session type discipline indeed forces concurrent interactions to follow structured patterns that guarantee the correctness of communication. Concerning reversibility, the linear use of session channels limits the effect of causal consistency: concurrent interactions along the same session are prevented and, hence, the backtracking of a given session follows a single path. Interactions along different sessions are of course still concurrent and, therefore, it is important to use causal-consistent rollback to revert them.

Committable sessions
The calculus ReSπ discussed so far is fully reversible, i.e. backward computations are always enabled. Full reversibility provides theoretical foundations for studying reversibility in the session-based π-calculus, but it is not suitable for practical use in structured communication-based programming. Therefore, in this section we enrich the framework so that computation can go backward and forward along a session, allowing the involved parties to try different interactions until the session is successfully completed. This is achieved by adding to the calculus a specific action for irreversibly committing the closure of sessions.
It is worth noticing that the fully reversible characterisation of the calculus permits us to prove that its machinery for reversibility (i.e., memories and their usage) works soundly with respect to the expected properties of a reversible calculus. This remains valid also for the extension proposed here. In fact, as clarified below, the extended calculus basically prunes some computations allowed in ReSπ, namely backward and forward actions that are undesired after a session closure.

ReSπ with commit
The syntax of ReSπC (Reversible Session-based π-calculus with Commit) is obtained from that of ReSπ (given in Fig. 4) by extending the syntactic category of processes P with the process commit(k).P, and the syntactic category of memories m with the commit memory t1 −√(s)→ t2. This new memory simply registers the closing event of the session identified by s, due to an agreement between the threads tagged by t1 and t2.
The irreversible closure of a session is achieved by the synchronisation, on its session channel s, of two threads of the form t1 : commit(s).P1 and t2 : commit(s).P2. This synchronisation acts similarly to the 'cut' operator in Prolog, as both mechanisms are used to prevent unwanted backtracking. After the synchronisation, since the session s is closed, the continuations P1 and P2 can no longer use the session channel s; this check is statically enforced by the type system for ReSπC (presented later on). Formally, the semantics of ReSπC is obtained by adding to the rules defining the reduction relation of ReSπ (Fig. 6) a rule [commit] for this synchronisation. Since commit is an irreversible action that will never be backtracked, there is no need to remember information about the continuation processes in the generated memory. For the same reason, there is no backward rule inverse to [commit].
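The rule display does not survive in this version of the text. A plausible reconstruction, offered only as an assumption consistent with the description above (the two threads synchronise on s, the continuations proceed under fresh tags, and the commit memory records only the tags and the session channel), is:

```latex
% Hypothetical reconstruction of rule [commit], not the paper's exact syntax:
\frac{}{%
  t_1 : \mathsf{commit}(s).P_1 \;\mid\; t_2 : \mathsf{commit}(s).P_2
  \;\longrightarrow\;
  (\nu\, t_1', t_2')\,\bigl(\, t_1' : P_1 \;\mid\; t_2' : P_2 \;\mid\;
  t_1 \,{-}\surd(s){\rightarrow}\, t_2 \,\bigr)}
```

The fresh tags t1′ and t2′ for the continuations follow the convention of the other forward rules of ReSπ.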
Concerning the type discipline, the types α (defined in Fig. 7) are extended with the type √, while the typing system is extended with a rule for commit which ensures that after the commit the session is closed.
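The typing rule display is likewise elided here. Assuming the judgement form Γ ⊢ P ▷ Δ commonly used for session typing (an assumption on our part, not necessarily the paper's exact notation), a plausible shape is:

```latex
% Hypothetical reconstruction: the committed channel k gets type \surd and
% cannot be used by the continuation P.
\frac{\Gamma \vdash P \triangleright \Delta \qquad k \notin \mathrm{dom}(\Delta)}
     {\Gamma \vdash \mathsf{commit}(k).P \triangleright \Delta \cdot k : \surd}
```

The side condition k ∉ dom(Δ) captures the requirement that the continuation no longer uses the session channel.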

Irreversibility propagation
When a commit action is executed, all the actions that caused it become unbacktrackable, although they were themselves reversible. In other words, a commit action creates a domino effect that disables the possibility of reversing the session actions previously performed.
To formalise the domino effect caused by the irreversible commit action, we introduce the notions of head and tail of a memory: intuitively, head(m) collects the tags of the threads consumed by the event stored in m, while tail(m) collects the tags of the continuation threads this event produced. Using the terminology of [5], we say that a memory is locked when the event stored inside it can never be reverted, because the conditions triggering the corresponding backward reduction will never be satisfied. Specifically, performing a backward reduction requires the coexistence of a memory with properly tagged threads, and the latter will never be available due to an irreversible action. Let us now formalise, given a process M, its set L_M of locked memories.

Definition 8 (Locked memories).
Let M be a ReSπC process and let M_M stand for the set of memories occurring in M. The set L_M of locked memories of M is defined as follows: the first point says that a commit memory is locked, while the second point describes the propagation of the locking effect, i.e. if a locked memory m′ consumed a thread generated by m (the event within m′ depends on the event within m, because the latter generates a thread involved in the former), then m is locked as well. Of course, L_M ⊆ M_M. Now, we can prove the main result about ReSπC, stating that committed sessions cannot be reverted (Theorem 6). This result is based on the notion of reversible memory (Definition 9) and on Lemma 7, ensuring that locked memories cannot be reverted. We use →+ to denote the transitive closure of the reduction relation.
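Definition 8 is naturally read as a fixpoint computation. The following sketch (with a hypothetical record representation of memories, not the paper's syntax) mirrors its two points:

```python
# Hypothetical representation of memories (not the paper's syntax): each memory
# carries head tags (threads consumed by the recorded event) and tail tags
# (continuation threads the event produced).
from dataclasses import dataclass

@dataclass(frozen=True)
class Memory:
    kind: str        # e.g. "commit", "action", "choice", "fork"
    head: frozenset  # tags of threads consumed by the event
    tail: frozenset  # tags of continuation threads it produced

def locked_memories(mems):
    """Fixpoint reading of Definition 8: commit memories are locked, and locking
    propagates backwards to any memory one of whose continuation threads was
    consumed by an already-locked memory."""
    locked = {m for m in mems if m.kind == "commit"}
    changed = True
    while changed:
        changed = False
        for m in mems:
            if m not in locked and any(m.tail & m2.head for m2 in locked):
                locked.add(m)
                changed = True
    return locked
```

On a chain where an action memory feeds a choice memory whose continuation performs the commit, all three memories end up locked, matching the domino effect described above; unrelated memories stay reversible.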

The multiple providers scenario in ReSπ C
Let us consider again the multiple providers scenario specified in ReSπ in Section 4.3. Suppose now, for the sake of simplicity, that the client and the first provider commit the session immediately after the acceptance of the quote, i.e. P_acc and Q_acc stand for commit(x) and commit(y), respectively. The computation described in Section 3.3 then proceeds in ReSπC as in ReSπ, after which M can evolve by performing the commit synchronisation; the generated commit memory locks all the memories of the committed session, which can therefore no longer be reverted.

Concluding remarks
To bring the benefits of reversible computing to structured communication-based programming, we have defined a theoretical framework, based on the π-calculus, that can be used as a formal basis for studying the interplay between (causal-consistent) reversibility and session-based structured interaction. We conclude the paper by discussing the main directions of ongoing and future work.
Concerning the reversible calculus, we plan to investigate the definition of a syntactic characterisation of consistent terms, which statically enforces history consistency in memories (as in [15]), rather than using the current definition of reachable process (as in [20,18]). In line with [17], we also plan to enrich the language with primitives and mechanisms to control reversibility.
For what concerns the typing discipline of ReSπ, we intend to investigate the definition of a typing system capable of type checking the contents of memories, in order to identify interaction errors that could be caused by terms restored from memories. This would permit us to extend the class of ReSπ programs to include processes with memories, thus allowing programmers to also write this kind of reversible code. We think that the required checks would resemble the semantics of the roll primitive in [17], which can then be used as a source of inspiration.
Coming to the extension with committable sessions, it is worth noticing that the commit action must be used carefully in the case of subordinate sessions. For example, consider the typical three-party scenario where a Customer sends an order to a Seller that, in turn, contacts a Shipper for the delivery. The session between the Seller and the Shipper is subordinate to the session between the Customer and the Seller. Now, when the main session is committed, the subordinate session is also involved in the commit. This is usually desirable, because the commit acts on the whole transaction and, hence, after the commit the interaction with the Shipper cannot be reverted. However, if the subordinate session is committed first, the main session is affected, because the interactions performed before the commit cannot be reverted. This latter situation is typically undesirable; therefore, as a best practice, commit should not be used by subordinate sessions. We plan to devise a static analysis technique supporting this disciplined use of commit in the presence of subordinate sessions.
As a longer-term goal, we intend to apply the proposed approach to other session-based formalisms, which consider, e.g., asynchronous sessions and multiparty sessions. We also plan to investigate implementation issues that may arise when incorporating the approach into standard programming languages, in particular in a distributed setting, where the rollback mechanism incorporated in the semantics of the language would require low-level synchronisations between the involved parties.

Appendix. Proofs

• [fwCon]: We have that M = t1 : ā(x).P1 | t2 : a(y).P2 and N = (νs, t1′, t2′)(t1′ : . . .

Proof. The proof is straightforward. Indeed, given a ReSπ process M such that φ(M) = P, it must have the form (ν t̃)(t : P | ∏i∈I mi) up to ≡. Thus, the process N such that φ(N) = Q can be defined accordingly: N ≡ (ν t̃)(t : Q | ∏i∈I mi). Now, we can conclude by exploiting the ninth law in Fig. 5, i.e. t : P ≡ t : Q if P ≡ Q, and the fact that the relation ≡ on ReSπ processes is a congruence. □

Proof. We proceed by induction on the derivation of P → Q. Base cases:
We conclude by applying φ to N, since we obtain φ(N) = Q. Inductive cases:

• [Par]: We have that P = P1 | P2 and Q = P1′ | P2. Let M be a ReSπ process such that φ(M) = P . . .

The proof for the only if part is by induction on the derivation of N → M. Base cases: . . . and M = t1 : ā(x).P1 | t2 : a(y).P2. Since N is a reachable process, the memory t1 −ā(x) (y) (νs) P1 P2→ t2, with fresh tags t1′ and t2′, has been generated by a synchronisation between the threads t1 : ā(x).P1 and t2 : a(y).P2, producing a session channel s and two continuation processes P1′ and P2′ . . .

Since the two transitions are concurrent, the involved threads are four distinct threads (two sending threads and two receiving ones). In particular, we consider below two communications along different sessions; in fact, the type discipline in Section 6 forbids concurrent communications along the same session (although, in this proof, this kind of concurrent communication would not cause any problem). Thus, in the considered case, the source of the two transitions is as follows: . . . where t1, t2, t3 and t4 are in t̃. Then, M → M1 . . . where e2 ↓ v2 and m2 = t3 −s2! e2 (x2) P3 P4→ t4. Now, we have that M1 → N with . . . As desired, we also have that M2 → N.
where m2 = ⟨t3, e2 ? P3 : P4, t3′⟩, e2 ↓ true, and t1, t2 and t3 are in t̃. Notably, since the two transitions are concurrent, the continuation tag in m2 can be neither t1 nor t2 (indeed, in the above process M this tag is t3′). Then, M → M1 with . . . As desired, both M1 and M2 can then evolve, with a backward and a forward reduction respectively, to N.

The proof of the Causal Consistency theorem follows the (standard) pattern used in [15,5]. In particular, it relies on two auxiliary lemmas: the first permits us to rearrange a trace as the composition of a backward trace and a forward one; the second permits a forward trace to be shortened.

Lemma 8. Let σ be a trace. There exist forward traces σ′ and σ″ such that σ is causally equivalent to σ′•; σ″.
Proof. We prove this by lexicographic induction on the length of σ and on the distance from the beginning of σ of the earliest pair of transitions in σ contradicting the property. If there is no such contradicting pair, then we are done. If there is one, say a pair of the form τ; τ′• with τ and τ′ forward transitions, we have two possibilities: either τ and τ′ are concurrent, or they are in conflict. In the first case, τ and τ′• can be swapped by using Lemma 6, resulting in a later earliest contradicting pair. The result then follows by induction, since swapping transitions keeps the total length constant. In the second case, there is a conflict on a tag, because it belongs to the stamps of both transitions. Again, we have two sub-cases: either the memory involved in the two transitions is the same or not. In the first sub-case we have τ = τ′, and then we can apply Lemma 5 to remove τ; τ•. Hence, the total length of σ decreases and, again, by induction we obtain the thesis. The second sub-case, instead, never happens. Indeed, let τ generate a memory m1 = ⟨t, e ? P : Q, t′⟩ (the case of an action memory is similar). A conflict with τ′ would be caused by the presence of t or t′ in the memory m2 removed by τ′• (and, by hypothesis, different from m1). However, t cannot occur in m2, because the transition τ consumed the thread uniquely tagged by t, which then cannot be involved in the other transition. Also t′ cannot occur in m2, because the thread uniquely tagged by t′ has been generated by τ; thus, another forward transition would have to take place before τ′• to involve this thread so that t′ could occur in m2. □

Lemma 9. Let σ1 and σ2 be coinitial and cofinal traces, with σ2 forward. Then, there exists a forward trace σ1′ of length at most that of σ1 such that σ1′ is causally equivalent to σ1.
Proof. The proof is by induction on the length of σ1. If σ1 is a forward trace, we are done. Otherwise, by Lemma 8 we can write σ1 as σ′•; σ″ (with σ′ and σ″ forward). Due to its form, σ1 contains only one pair of transitions with opposite directions, say τ•; τ′. Let m1 be the memory removed by τ•. Then, in σ″ there is a forward transition generating m1; otherwise there would be a difference with respect to σ2, since the latter is a forward trace. Let τ1 be the earliest such transition in σ″. Since τ1 is able to put back m1, it has to be the opposite of τ•, i.e. τ1 = τ. Now, we can swap τ1 with all the transitions between τ• and τ1, in order to obtain a trace in which τ• and τ1 are adjacent. To do so, we use Lemma 6, since all the transitions in between are concurrent. Assume in fact that there is a transition involving a memory m2 which is not concurrent to τ1. A possible conflict could be caused by the presence of a continuation tag, say t, of m1 in m2. But this case can never happen, since t is freshly generated by the forward rule used to produce τ1 and thus, thanks to tag uniqueness, t cannot coincide with any tag of a previously executed transition. The other possible conflict could be caused by the presence of a continuation tag of m2 in m1. Since τ• removes m1, this memory cannot contain a fresh tag generated by a subsequent transition when m2 is created. Thus, also this case can never happen. Now that τ• and τ are adjacent, we can remove both of them. The resulting trace is shorter, and the thesis follows by the inductive hypothesis. □

2. (⇐). By Property 1, we get that process M is reachable. Thus, by Definition 6, there exists a program N such that φ(N) is typable under a balanced channel environment, and N →∗ M. Now, we can proceed as in case 1, by applying Lemma 2 and . . . □

Proof. The proof proceeds by contradiction. Suppose that there exists a memory m such that m ∈ L_M and m is reversible.
By Definition 9, there exists M′ such that M →+ M′ and m ∉ M_M′. We have two cases:
1. m = t1 −√(s)→ t2: this case is trivial, because no rule is able to revert this kind of memory (in fact, the forward rule [commit] is not paired with a corresponding backward rule). Thus, no process M′ such that m ∉ M_M′ can be derived from M, which contradicts the hypothesis.
2. m ≠ t1 −√(s)→ t2: since m ∈ L_M, by Definition 8 there exist a memory m′ ∈ L_M and a tag t such that t ∈ tail(m) and t ∈ head(m′). To revert m, according to rules [bwCon], [bwCom], [bwLab] and [bwIf], for each tag t ∈ tail(m) a thread tagged by t must be in parallel with the memory. However, the tag t in tail(m) also belongs to head(m′), meaning that the (unique) thread tagged by t has already been executed (in fact, the data concerning such execution is stored in m′). Thus, no backward rule can be applied to revert m in one step: the only possibility is to revert m′ first. Now, if m′ is a commit memory, then we proceed as in case 1, i.e. m′ cannot be reverted and, hence, m is not reversible, which is a contradiction. Otherwise, we repeat the same reasoning of case 2 for m′, and proceed in this way until a commit memory is found; indeed, such a commit memory must exist by construction of L_M (Definition 8, first point). As in case 1, this memory cannot be reverted and, hence, all the involved memories, including m, are not reversible, which is a contradiction. □
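The rearrangement argument of Lemma 8 above is essentially an algorithm: cancel an adjacent inverse pair on the same memory, otherwise swap a concurrent forward/backward pair. A toy sketch under strong simplifying assumptions (transitions abstracted to a direction and a memory identifier; distinct identifiers assumed concurrent, a much coarser check than the paper's stamp-based one):

```python
def rearrange(trace):
    """Toy version of Lemma 8: normalise a trace into a backward prefix followed
    by a forward suffix. Transitions are ('+', m) or ('-', m), where m is a
    memory identifier; distinct identifiers are assumed concurrent."""
    trace = list(trace)
    changed = True
    while changed:
        changed = False
        for i in range(len(trace) - 1):
            (d1, m1), (d2, m2) = trace[i], trace[i + 1]
            if d1 == '+' and d2 == '-':
                if m1 == m2:
                    # inverse pair on the same memory: cancel (as via Lemma 5)
                    del trace[i:i + 2]
                else:
                    # concurrent pair: swap (as via the square lemma, Lemma 6)
                    trace[i], trace[i + 1] = trace[i + 1], trace[i]
                changed = True
                break
    return trace
```

Each cancellation shortens the trace and each swap moves a backward transition leftwards, so the loop terminates; this mirrors the lexicographic measure used in the proof of Lemma 8.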