A Strong Bisimulation for a Classical Term Calculus

When translating a term calculus into a graphical formalism many inessential details are abstracted away. In the case of the $\lambda$-calculus translated to proof-nets, these inessential details are captured by a notion of equivalence on $\lambda$-terms known as $\simeq_\sigma$-equivalence, in both the intuitionistic (due to Regnier) and classical (due to Laurent) cases. The purpose of this paper is to uncover a strong bisimulation behind $\simeq_\sigma$-equivalence, as formulated by Laurent for Parigot's $\lambda\mu$-calculus. This is achieved by introducing a relation $\simeq$, defined over a revised presentation of the $\lambda\mu$-calculus that we dub $\Lambda M$. More precisely, we first identify the reasons why Laurent's $\simeq_\sigma$-equivalence on $\lambda\mu$-terms fails to be a strong bisimulation. Inspired by Laurent's \emph{Polarized Proof-Nets}, this leads us to distinguish multiplicative and exponential reduction steps on terms. Second, we enrich the syntax of $\lambda\mu$ to allow us to track the exponential operations. These technical ingredients pave the way towards a strong bisimulation for the classical case. We introduce a calculus $\Lambda M$ and a relation $\simeq$ that we show to be a strong bisimulation with respect to reduction in $\Lambda M$, i.e. two $\simeq$-equivalent terms have the exact same reduction semantics, a result which fails for Regnier's $\simeq_\sigma$-equivalence in the $\lambda$-calculus as well as for Laurent's $\simeq_\sigma$-equivalence in $\lambda\mu$. Although $\simeq$ is formulated over an enriched syntax, and hence is not strictly included in Laurent's $\simeq_\sigma$, we show how it can be seen as a restriction of it.


Introduction
An important topic in the study of programming language theories is unveiling structural similarities between expressions. These are widely known as structural equivalences: equivalent expressions behave in exactly the same way. Process calculi are a rich source of examples. In CCS, expressions stand for processes in a concurrent system. For example, P | Q denotes the parallel composition of processes P and Q. Structural equivalence includes equations such as the one stating that P | Q and Q | P are equivalent. This minor reshuffling of subexpressions has little impact on the behavior of the overall expression: structural equivalence is a strong bisimulation for process reduction.
This paper is concerned with such notions of reshuffling of expressions in λ-calculi with control operators. The induced notion of structural equivalence, in the sequel ≃, should identify terms having exactly the same reduction semantics too. Stated equivalently, ≃ should be a strong bisimulation with respect to reduction in these calculi. This means that ≃ should be symmetric and, moreover, o ≃ p and o → o′ should imply the existence of p′ such that p → p′ and o′ ≃ p′, where → denotes some given notion of reduction for calculi with control operators. It is worth mentioning that we are not after a general theory of program equivalence. On the one hand, not all terms having the same reduction semantics are identified, only those resulting from reshuffling in the sense made precise below. On the other hand, there are terms that do not have the same reduction semantics but would still be considered to "behave in the same way" (e.g. (1.2) below). In particular, our proposed notion of equivalence is not a bisimilarity: there are terms that have the same reduction behavior but are not related by our ≃-equivalence.

Before addressing λ-calculi with control operators, we comment on the state of affairs in the λ-calculus. Formulating structural equivalences for the λ-calculus is hindered by the sequential (left-to-right) orientation in which expressions are written. Consider for example the terms (λx.(λy.t) u) v and (λx.λy.t) v u. They seem to have the same redexes, only permuted, similar to the situation captured by the above-mentioned CCS equation. A closer look, however, reveals that this is not entirely correct. The former has two redexes (one indicated below by underlining and another by overlining) and the latter has only one (underlined):

(λx.(λy.t) u) v   and   (λx.λy.t) v u   (1.2)

The overlined redex on the left-hand side is not visible on the right-hand side; it will only reappear, as a newly created redex, once the underlined redex is computed.
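The redex count in (1.2) can be checked mechanically. Below is a minimal sketch (the AST constructors `Var`/`Lam`/`App` and the function name are ours, not the paper's) that counts β-redexes, confirming that the left term has two while the right term has only one:

```python
from dataclasses import dataclass

# Minimal λ-term AST (free variables t, u, v are represented as Var nodes).
@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object

def count_beta_redexes(term):
    """Count subterms of the shape (λx.t) u."""
    if isinstance(term, Var):
        return 0
    if isinstance(term, Lam):
        return count_beta_redexes(term.body)
    here = 1 if isinstance(term.fun, Lam) else 0
    return here + count_beta_redexes(term.fun) + count_beta_redexes(term.arg)

t, u, v = Var("t"), Var("u"), Var("v")
left  = App(Lam("x", App(Lam("y", t), u)), v)   # (λx.(λy.t) u) v
right = App(App(Lam("x", Lam("y", t)), v), u)   # (λx.λy.t) v u
print(count_beta_redexes(left), count_beta_redexes(right))  # 2 1
```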
Despite the fact that the syntax gets in the way, Regnier [Reg94] proved that these terms behave in essentially the same way. More precisely, he introduced a structural equivalence for λ-terms, known as σ-equivalence, and proved that σ-equivalent terms have head, leftmost, perpetual and, more generally, maximal reductions of the same length. However, the mismatch between the terms in (1.2) is unsatisfying, since there clearly seems to be an underlying strong bisimulation which does not show itself due to a notational shortcoming. It turns out that through the graphical intuition provided by linear logic proof-nets (PN), one can define an enriched λ-calculus with explicit substitutions (ES) that unveils a strong bisimulation for the intuitionistic case [ABKL14]. In this paper, we resort to this same intuition to explore whether it is possible to uncover a strong bisimulation behind the notion of σ-equivalence formulated by Laurent [Lau02, Lau03] in the setting of classical logic. Thus, we will not only capture structural equivalence on pure functions, but also on programs with control operators. We next briefly revisit proof-nets and discuss how they help unveil structural equivalence as a strong bisimulation for λ-calculi. An explanation of the challenges that we face in addressing the classical case will follow.

Proof-nets.
A proof-net is a graph-like structure whose nodes denote logical inferences and whose edges or wires denote the formulas they operate on. Proof-nets were introduced in the setting of linear logic [Gir87], a logic that provides a mechanism to explicitly control the use of resources by restricting the application of the structural rules of weakening and contraction. Proof-nets are equipped with an operational semantics, specified by graph transformation rules, which captures cut elimination in sequent calculus. The resulting cut elimination rules on proof-nets are split into two different kinds: multiplicative, which essentially reconfigure wires, and exponential, which are the only ones able to erase or duplicate (sub)proof-nets. Most notably, proof-nets abstract away the order in which certain rules occur in a sequent derivation. As an example, assume three derivations of the judgements ⊢ Γ; A, ⊢ ∆; A⊥; B and ⊢ Π; B⊥, respectively. The order in which these derivations are composed via cuts into a single derivation is abstracted away in the resulting proof-net. In other words, different terms/derivations are represented by the same proof-net. Hidden structural similarity between terms can thus be studied by translating them to proof-nets. Moreover, following the Curry-Howard isomorphism, which relates computation and logic, this correspondence between a term language and a graphical formalism can also be extended to their reduction behavior [Acc18, Kes22]. In this paper we focus on defining one such structural equality that is a strong bisimulation for a classical lambda calculus based on Parigot's λµ-calculus [Par92, Par93]. Although we rely on intuitions provided by Laurent's Polarized Proof-Nets [Lau02, Lau03], knowledge about Polarized Proof-Nets is not required to read this work, and they are not further discussed. We begin with an overview of a similar program carried out in the intuitionistic case.
Intuitionistic σ-Equivalence. Regnier introduced a notion of σ-equivalence on λ-terms (written ≃σ and depicted in Fig. 1), and proved that σ-equivalent terms behave in an essentially identical way. This equivalence relation involves permuting certain redexes, and was unveiled through the study of proof-nets. In particular, following Girard's encoding of intuitionistic logic into linear logic [Gir87], σ-equivalent terms are mapped to the same proof-net (modulo multiplicative cuts and structural equivalence of PN).
(λx.λy.t) u ≃σ1 λy.(λx.t) u   (y ∉ fv(u))        (λx.t) u v ≃σ2 (λx.t v) u   (x ∉ fv(v))

Figure 1: Regnier's σ-equivalence on λ-terms.

The reason why Regnier's result is not immediate is that redexes present on one side of an equation may disappear on the other side of it, as illustrated with the terms in (1.2). One might rephrase this observation by stating that ≃σ is not a strong bisimulation over the set of λ-terms. If it were, then establishing that σ-equivalent terms behave essentially in the same way would be trivial.
Adopting a more refined view of the λ-calculus as suggested by linear logic, which splits cut elimination on logical derivations into multiplicative and exponential steps, yields a decomposition of β-reduction on terms into multiplicative/exponential steps. The theory of explicit substitutions (a survey can be found in [Kes09]) provides a convenient syntax to reflect this splitting at the term level. Indeed, β-reduction can be decomposed into two steps, namely B (for Beta) and S (for Substitution):

(λx.t) u →B t[x \ u]        t[x \ u] →S t{x \ u}        (1.4)

or, more generally, the reduction-at-a-distance version of B, introduced in [AK10] and written dB:

(λx.t)L u →dB t[x \ u]L

where L is a possibly empty list of explicit substitutions [x1 \ u1] . . . [xn \ un]. Firing the dB-rule creates a new explicit substitution operator, written [x \ u], so that dB essentially reconfigures symbols (it is in some sense an innocuous or plain rule), and indeed reads as a multiplicative cut in proof-nets. The S-rule executes the substitution by performing a replacement of all free occurrences of x in t with u, written t{x \ u}, so that it is S that performs interesting or meaningful computation, in the sense that it performs exponential cut steps in proof-nets. We write →S for S-steps inside an arbitrary context, and similarly for →dB. The decomposition of β-reduction by means of the reduction rules in (1.4) prompts one to replace Regnier's ≃σ (Fig. 1) with a new relation [AK12b] that we write here ≃σ (Fig. 2). The latter is formed essentially by taking the dB-normal form of each side of the ≃σ equations. Also included in ≃σ is a third equation ≃σ3 allowing commutation of orthogonal (independent) substitutions. Notice, however, that the dB-expansion of ≃σ3 results in σ-equivalent terms, since t[y \ v][x \ u] ≃σ3 t[x \ u][y \ v], with x ∉ fv(v) and y ∉ fv(u), dB-expands to (λy.(λx.t) u) v ≃σ (λx.(λy.t) v) u, both of which are σ-equivalent by ≃σ1 and ≃σ2. Through ≃σ it is possible to unveil a strong bisimulation for the intuitionistic case by working on λ-terms with ES and the notion of β-reduction at a distance.
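To make the dB/S decomposition concrete, here is a minimal executable sketch (the AST and function names are ours; capture-avoidance is deliberately ignored): a dB step fires a β-redex at a distance, introducing an explicit substitution, and an S step then executes it.

```python
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object
@dataclass
class Sub: body: object; var: str; arg: object   # t[x \ u]

def strip_subs(t):
    """Peel the substitution context L off (λx.t)L, returning (core, L)."""
    subs = []
    while isinstance(t, Sub):
        subs.append((t.var, t.arg))
        t = t.body
    return t, subs

def dB(t):
    """(λx.t)L u →dB t[x \ u]L : fire a β-redex at a distance."""
    assert isinstance(t, App)
    core, subs = strip_subs(t.fun)
    assert isinstance(core, Lam)
    result = Sub(core.body, core.var, t.arg)
    for x, u in reversed(subs):                  # rebuild L around the new ES
        result = Sub(result, x, u)
    return result

def S(t):
    """t[x \ u] →S t{x \ u} : execute the explicit substitution (naively)."""
    assert isinstance(t, Sub)
    def subst(o, x, u):
        if isinstance(o, Var):
            return u if o.name == x else o
        if isinstance(o, Lam):
            return Lam(o.var, subst(o.body, x, u))
        if isinstance(o, App):
            return App(subst(o.fun, x, u), subst(o.arg, x, u))
        return Sub(subst(o.body, x, u), o.var, subst(o.arg, x, u))
    return subst(t.body, t.var, t.arg)

# (λx.x) y →dB x[x \ y] →S y
print(S(dB(App(Lam("x", Var("x")), Var("y")))))  # Var(name='y')
```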
Indeed, the following holds:

Theorem 1.1 (Strong Bisimulation for the Intuitionistic Case I). Let t ≃σ t′. If t →dB,S u, then there exists u′ such that t′ →dB,S u′ and u ≃σ u′.

While any two ≃σ-equivalent λ-terms with ES translate to the same proof-net, the converse is not true. For example, the terms (λx.t v) u and (λx.t) u v, where x ∉ fv(v), translate to the same proof-net [Reg94] (indeed, they are σ-equivalent); however, they are not ≃σ-equivalent. Still, as remarked in [AK12b], the dB-normal forms of those terms, namely (t v)[x \ u] and t[x \ u] v, are ≃σ-equivalent, thus suggesting an alternative equivalence relation defined only on dB-normal forms. Indeed, plain forms are λ-terms with ES that are in dB-normal form (intuitively, those terms that are in multiplicative normal form from the proof-net point of view). Similarly, plain computation is given by dB-reduction; plain computation to normal form produces plain forms. As mentioned above, our notion of meaningful computation is taken to be the reduction relation →S. We write ≃ for this refined equivalence notion on plain forms, which will in fact be included in the strong bisimulation that we propose in this paper (cf. Def. 5.5). The following variation of the theorem above is obtained:

Theorem 1.2 (Strong Bisimulation for the Intuitionistic Case II). Let ⇝ be →S followed by →dB-reduction to dB-normal form. Let t, t′ be terms in dB-normal form such that t ≃ t′. If t ⇝ u, then there exists u′ such that t′ ⇝ u′ and u ≃ u′.

Thus, a strong bisimulation can be defined on a set of plain forms (here λ-terms with ES which are in dB-normal form), with respect to a meaningful computation relation (here →S) followed by dB-reduction to dB-normal form.
Summing up, a strong bisimulation was obtained by decomposing β-reduction into plain computation (dB) and meaningful computation (S). This methodology, consisting in identifying an appropriate notion of plain form and of meaningful computation, both over terms, allows a strong bisimulation to surface. It requires establishing a corresponding distinction between multiplicative and exponential steps in the underlying term semantics. We propose following this same methodology for the classical case. However, as we will see, the notions of plain and meaningful computation are not so easy to construct for Parigot's λµ-calculus. We next briefly introduce this calculus as well as the notion of σ-equivalence as presented by Laurent.
Classical σ-Equivalence. λ-calculi with control operators include operations to manipulate the context in which a program is executed. We focus here on Parigot's λµ-calculus, which extends the λ-calculus with two new operations: [α] t (command) and µα.c (µ-abstraction). The former may informally be understood as "call continuation α with t as argument" and the latter as "record the current continuation as α and continue as c". Reduction in λµ consists of the β-rule together with the µ-rule (see Sec. 3). Laurent extended Regnier's σ-equivalence to the λµ-calculus (cf. Fig. 3 in Sec. 3). Here is an example of terms related by this extension, where the redexes are underlined/overlined and ≃σ denotes Laurent's aforementioned relation. Once again, the fact that a harmless permutation of redexes has taken place is not obvious: the term on the right has two redexes (µ and β) but the one on the left has only one (β-)redex. Another, more subtle, example of terms related by Laurent's extension (1.5) clearly suggests that operational indistinguishability cannot rely on relating arbitrary µ-redexes: the underlined µ-redex on the left does not appear at all on the right. Clearly, Laurent's σ-equivalence on λµ-terms fails to be a strong bisimulation.
Towards a Strong Bisimulation for λµ. We seek to formulate a similar notion of equivalence for calculi with control operators, in the sense that it is concerned with harmless permutations of redexes possibly involving control operators, and induces a strong bisimulation. A first step towards our goal involves decomposing the µ-rule as was done for the β-rule in (1.4):

(µα.c) u → µα′. c⟨α \ α′ u⟩        c⟨α \ α′ u⟩ → c{{α \ α′ u}}        (1.6)

where c{{α \ α′ u}} denotes the fresh replacement changing all subexpressions of the form [α] t in c to [α′] (t u). A brief discussion on our choice of notation for replacement may be found at the end of this section. We still need to add the notion of distance to this operational semantics, as was done for substitution. This produces a rule dM (for Mu at a distance), to introduce an explicit replacement, and another rule R (for Replacement), that executes explicit replacements:

(µα.c)L u →dM (µα′. c⟨α \ α′ u⟩)L        c⟨α \ α′ s⟩ →R c{{α \ α′ s}}        (1.7)

where s is a (non-empty) stack of arguments and c{{α \ α′ s}} replaces each sub-expression of the form [α] t in c by [α′] applied to t followed by all the arguments of s. Following our analogy with the intuitionistic case, our plain rule is dM and meaningful computation is performed by R.
Therefore, we tentatively fix our notion of meaningful computation to be S ∪ R over the set of plain forms, the latter now obtained by taking both dB- and dM-normal forms. However, in contrast to the intuitionistic case, where the decomposition of β into a multiplicative rule dB and an exponential rule S suffices for unveiling the strong bisimulation behind Regnier's σ-equivalence in the λ-calculus, it turns out that splitting the µ-rule into dM and R is not enough in the classical case. We face two obstacles.

Decomposing Meaningful Steps: Consider (1.5) from above, to which the methodology that led [AK12b] to the theory of Fig. 2 does not directly extend. In its full generality, the R rule in (1.7) can certainly duplicate or erase u. However, it may also be the case that there is a unique occurrence of α in c. If, furthermore, this occurrence cannot be duplicated or erased any time later, then this instance of R may quite reasonably be catalogued as non-meaningful. Indeed, these linear replacements will form part of our notion of plain computation rather than of the meaningful one. Even with this revised notion of plain computation, an additional µ-step will be needed on the left of (1.5) (the underlined one), which is not present on the right, to be able to obtain equivalent terms.
Hence, this does not constitute a strong bisimulation diagram. However, completely dropping ≃ρ is not possible, since it is required to be able to swap renamings: for example, a swapping identity can be deduced using ≃ρ twice. This swapping identity, together with two others (see Sec. 6 for details), is necessary to be able to close other strong bisimulation diagrams; Example 5.10 in Sec. 5 illustrates this point. As it turns out, if one drops ≃ρ but retains such swapping equations, just "enough of ≃ρ" is preserved to obtain our strong bisimulation result.

Contributions.
Our contributions may be summarised as follows:
(1) A refinement of λµ, called the ΛM-calculus, including explicit substitutions for variables and explicit replacements for names. The ΛM-calculus is proved to be confluent (Thm. 4.3).
(2) A notion of structural equivalence ≃ for ΛM that is a strong bisimulation with respect to meaningful computation on the set of plain normal forms (Thm. 5.9).
(3) A precise correspondence result between our bisimulation ≃ on ΛM-objects and Laurent's original σ-equivalence on λµ-objects.
This paper is an extended and revised version of [KBV20].
Structure of the paper. After some preliminaries on notation introduced in Sec. 2, Sec. 3 and Sec. 4 present λµ and ΛM, respectively. Sec. 5 discusses the difficulties in formulating a strong bisimulation and presents the proposed solution. Sec. 6 addresses the correspondence proof between our bisimulation ≃ on ΛM-objects and Laurent's original σ-equivalence on λµ-objects. Finally, Sec. 7 concludes and describes related work. Most proofs are relegated to the Appendix.

Some Basic Preliminary Notions
We start this section with some generic notations. Let R be any reduction relation on a set of elements O. We write ։R for the reflexive-transitive closure of →R. If S is another reduction relation on O, we use →R,S to denote →R ∪ →S. We say there is an R-reduction from t to u whenever t ։R u. We occasionally refer to R-reduction as R-computation.
We use NF R to denote all the normal forms of R, i.e. the set of all the elements in O which are in R-normal form.
Given t ∈ O, we say that u is an R-normal form of t if t ։ R u and u is in R-normal form. We denote by nf R (t) the set of all R-normal forms of t; this set is always a singleton when R is confluent and terminating.
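The last remark can be illustrated operationally. The following sketch (function names are ours) computes an R-normal form by iterating a one-step function; under the stated confluence and termination assumptions, the result is the unique element of nf_R(t):

```python
def normal_form(step, t, fuel=10_000):
    """Iterate a one-step reduction function `step` (returning None on normal
    forms) until no step applies. Assuming R is confluent and terminating,
    the result is the unique R-normal form of t."""
    for _ in range(fuel):
        u = step(t)
        if u is None:
            return t
        t = u
    raise RuntimeError("no normal form reached within fuel bound")

# Toy confluent, terminating relation R on integers: n → n - 1 while n > 0.
step = lambda n: n - 1 if n > 0 else None
print(normal_form(step, 5))  # 0
```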

The λµ-Calculus
In this section we introduce the untyped λµ-calculus.
3.1. The Untyped λµ-Calculus. Given a countably infinite set of variables V (x, y, . . .) and names N (α, β, . . .), the set of objects Oλµ, terms Tλµ, commands Cλµ and contexts of the λµ-calculus are defined by means of the following grammar:

(objects)    o ::= t | c
(terms)      t, u ::= x | λx.t | t u | µα.c
(commands)   c ::= [α] t

The grammar extends the terms of the λ-calculus with two new constructors: commands [α] t and µ-abstractions µα.c. The combination of a command and a µ-abstraction, as in [α] µβ.c, will be coined explicit renaming. The term (. . . ((t u1) u2) . . .) un abbreviates to t u1 u2 . . . un, or t u when n is clear from the context. Regarding contexts, there are two holes, ⋄t and ⋄c, of sort term (t) and command (c) respectively. We write O⟨o⟩ to denote the replacement of the hole ⋄t (resp. ⋄c) by a term (resp. by a command). We often decorate contexts or functions over expressions with one of the sorts t and c to be more clear. For example, Ot is a context O with a hole of sort term. The subscript is omitted if it is clear from the context. Free and bound variables of objects are defined as expected, in particular fv(µα.c) = fv(c) and fv([α] t) = fv(t). Free and bound names are defined as follows: fn(µα.c) = fn(c) \ {α} and fn([α] t) = fn(t) ∪ {α}. We use fvx(o) and fnα(o) to denote the number of free occurrences of the variable x and the name α in the object o, respectively. This notion is naturally extended to contexts.
We work with the standard notion of α-conversion, i.e. renaming of bound variables and names; thus, for example, (µα.[α] λx.x) z =α (µβ.[β] λy.y) z. In particular, when using two different symbols to denote bound variables or names, we assume that they are different without explicitly mentioning it.
Application of the implicit substitution {x \ u} to the object o, written o{x \ u}, may require α-conversion in order to avoid capture of free variables/names, and it is defined as expected.
Application of the implicit replacement {{α \ α′ u}} to an object o, written o{{α \ α′ u}}, passes the term u as an argument to any sub-command of o of the form [α] t and changes the name α to α′. This operation is also defined modulo α-conversion in order to avoid the capture of free variables/names. Formally:

x{{α \ α′ u}} = x
(λx.t){{α \ α′ u}} = λx. t{{α \ α′ u}}
(t v){{α \ α′ u}} = t{{α \ α′ u}} v{{α \ α′ u}}
(µβ.c){{α \ α′ u}} = µβ. c{{α \ α′ u}}
([α] t){{α \ α′ u}} = [α′] (t{{α \ α′ u}} u)
([β] t){{α \ α′ u}} = [β] t{{α \ α′ u}}    (β ≠ α)

Parigot's original formulation of the λµ-calculus [Par92, Par93] uses a binary replacement operation c{{α \ u}} rather than the ternary one we introduced above. Details on our choice of notation, which are related to explicit replacements, are developed in Sec. 4.2.
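The ternary replacement just described is directly implementable. Here is a minimal sketch (the AST constructors and the function name `replace` are ours; capture-avoidance is ignored for brevity) of o{{α \ α′ u}}:

```python
from dataclasses import dataclass

@dataclass
class Var: name: str
@dataclass
class Lam: var: str; body: object
@dataclass
class App: fun: object; arg: object
@dataclass
class Mu:  name: str; cmd: object            # µα.c
@dataclass
class Cmd: name: str; term: object           # [α] t

def replace(o, alpha, alpha2, u):
    """Turn every sub-command [α] t into [α′] (t{{α \\ α′ u}} u)."""
    if isinstance(o, Var):
        return o
    if isinstance(o, Lam):
        return Lam(o.var, replace(o.body, alpha, alpha2, u))
    if isinstance(o, App):
        return App(replace(o.fun, alpha, alpha2, u),
                   replace(o.arg, alpha, alpha2, u))
    if isinstance(o, Mu):
        return Mu(o.name, replace(o.cmd, alpha, alpha2, u))
    t = replace(o.term, alpha, alpha2, u)
    if o.name == alpha:
        return Cmd(alpha2, App(t, u))        # pass u and rename α to α′
    return Cmd(o.name, t)

# ([α] x){{α \ α′ u}} = [α′] (x u)
print(replace(Cmd("a", Var("x")), "a", "a2", Var("u")))
```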
The one-step reduction relation →λµ is given by the closure under all term contexts Ot of the following rewriting rules β and µ, i.e. →λµ = Ot⟨→β ∪ →µ⟩:

(λx.t) u →β t{x \ u}        (µα.c) u →µ µα′. c{{α \ α′ u}}   (α′ fresh)

Given X ∈ {β, µ}, we define an X-redex to be a term having the form of the left-hand side of rule →X. A similar notion will be used for all the rewriting rules used in this paper. It is worth noticing that Parigot's [Par92] µ-rule for the λµ-calculus relies on a binary implicit replacement operation {{α \ u}} assigning [α] (t{{α \ u}} u) to each sub-expression of the form [α] t (thus not changing the name of the command). We adopt here the ternary presentation [KV19] of the implicit replacement operator, because it naturally extends to that of the ΛM-calculus in Sec. 4.
3.2. The notion of σ-equivalence for λµ-terms. As in the λ-calculus, structural equivalence for the λµ-calculus captures inessential permutations of redexes, but this time also involving the control constructs. The first two equations of Fig. 3 are exactly those of Regnier (hence ≃σ on λµ-terms strictly extends ≃σ on λ-terms); the remaining ones involve µ-abstractions. It is worth noticing that our equations ≃σ7 and ≃σ8 are called, respectively, ≃θ and ≃ρ in [Lau03].
Laurent proved properties for ≃σ on λµ-terms similar to those of Regnier for ≃σ on λ-terms. More precisely, u ≃σ v implies that u is normalisable (resp. head normalisable, strongly normalisable) iff v is normalisable (resp. head normalisable, strongly normalisable) [Lau03, Prop. 35]. Based on Girard's encoding of classical into intuitionistic logic [Gir91], he also proved that the translations of the left- and right-hand sides of the equations of ≃σ, in a typed setting, yield structurally equivalent (polarised) proof-nets [Lau03, Thm. 41]. These results are non-trivial because the left- and right-hand sides of the equations in Fig. 3 do not have the same β- and µ-redexes. For example, (µα.[α] x) y and x y are related by equation ≃σ7; however, the former has a µ-redex (more precisely, a linear µ-redex) and the latter has none. Indeed, ≃σ is not a strong bisimulation with respect to λµ-reduction, as mentioned in the introduction (cf. the terms in (1.5)). This also shows that an analogue of Thm. 1.1 does not hold for λµ.
There are other examples illustrating that ≃ σ is not a strong bisimulation (cf. Sec. 5). It seems natural to wonder whether, just like in the intuitionistic case, a more refined notion of λµ-reduction could change this state of affairs; a challenge we take up in this paper.

The ΛM -calculus
As a first step towards the definition of a structural equivalence for the λµ-calculus that is a strong bisimulation, we extend its syntax and operational semantics to a term calculus with explicit operators for substitution and replacement.
4.1. Terms for ΛM. We consider again a countably infinite set of variables V (x, y, . . .) and names N (α, β, . . .). The set of objects OΛM, terms TΛM, commands CΛM, stacks and contexts of the ΛM-calculus are given by the following grammar:

(objects)    o ::= t | c
(terms)      t, u ::= x | λx.t | t u | µα.c | t[x \ u]
(commands)   c ::= [α] t | c⟨α \ α′ s⟩
(stacks)     s ::= t | t · s

Terms are those of the λµ-calculus enriched with explicit substitutions (ES) of the form [x \ u]. The subterm u in a term of the form t u (resp. the ES t[x \ u]) is called the argument of the application (resp. substitution). Commands are enriched with explicit replacements of the form ⟨α \ α′ s⟩ (where the stack s is to be considered as a list of arguments, as e.g. in [Her94]). Notice that stacks inside explicit replacements are required to be non-empty.
Stacks can be concatenated as expected (denoted s · s′ by abuse of notation): if s = t0 · . . . · tn, then s · s′ = t0 · . . . · tn · s′, where _ · _ is right-associative. Given a term u, we use the abbreviation u :: s for the term resulting from the application of u to all the terms of the stack s, i.e. if s = t0 · . . . · tn, then u :: s = u t0 . . . tn. Recall that application is left-associative, so this operation also is; hence u :: s :: s′ means (u :: s) :: s′. The use of stacks in the new calculus is motivated by the example given just after the definition of the implicit replacement in Sec. 4.3.
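As a small executable illustration (representing terms as strings and stacks as Python lists; names are ours), u :: s unfolds to a left-nested application, and concatenation satisfies (u :: s) :: s′ = u :: (s · s′):

```python
def apply_stack(u, s):
    """u :: s  =  u t0 ... tn, i.e. a left-nested application."""
    for t in s:
        u = f"({u} {t})"
    return u

def concat(s, s2):
    """Stack concatenation s · s′ (lists of arguments)."""
    return s + s2

s = ["t0", "t1"]
print(apply_stack("u", s))                  # ((u t0) t1)
print(apply_stack("u", concat(s, ["t2"])))  # (((u t0) t1) t2)
```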
Free and bound variables of ΛM-objects are defined as expected, the new explicit operators being binding symbols: i.e. fv(t[x \ u]) = (fv(t) \ {x}) ∪ fv(u) and bv(t[x \ u]) = bv(t) ∪ bv(u) ∪ {x}. Concerning free and bound names of ΛM-objects, we remark in particular that an explicit replacement binds α in c, while the occurrences of α′ in c⟨α \ α′ s⟩ are not bound: fn(c⟨α \ α′ s⟩) = (fn(c) \ {α}) ∪ {α′} ∪ fn(s). We work, as usual, modulo α-conversion, so that bound variables and names can be renamed.
The notions of free and bound variables and names are extended to contexts by defining fv(⋄t) = fv(⋄c) = fn(⋄t) = fn(⋄c) = ∅. Then e.g. x is bound in λx.⋄t but not in (λx.x) ⋄t, and α is bound in ⋄c⟨α \ α′ s⟩. Bound names whose scope includes a hole ⋄t or ⋄c cannot be α-renamed.
4.2. On the Notation for Explicit Replacements. A first alternative is to follow Parigot's binary presentation of replacement. This alternative has the advantage of being relatively simple. However, it is not without its subtleties. Most notable is determining the status of names. Consider an expression such as µα.c⟨α \ u⟩⟨α \ v⟩, resulting from reducing (µα.c) u v. One might understand that names are bound by multiple binders. Occurrences of α in c would thus be bound by three operators: the outermost µα and the two explicit replacements ⟨α \ u⟩ and ⟨α \ v⟩. Alternatively, the occurrences of α in ⟨α \ u⟩ and ⟨α \ v⟩ could be understood as free. In this case, the outermost µα binds all free occurrences of α in c and the two occurrences of α in ⟨α \ u⟩ and ⟨α \ v⟩. Beyond settling for one of these two approaches, there is the additional issue that firing ⟨α \ v⟩ actually doesn't affect α in c at all, for otherwise the arguments u and v would be confused. The notion of scope is lost, and hence the ordering between ⟨α \ u⟩ and ⟨α \ v⟩. Presentation (1.6) is somewhat heavier but crisper in terms of meaning. The previously mentioned term would be recast as

µα′′. c⟨α \ α′ u⟩⟨α′ \ α′′ v⟩

Here α in c is bound by just one operator, namely ⟨α \ α′ u⟩. Moreover, the dependency between u and v is now readily apparent: α′ is bound by ⟨α′ \ α′′ v⟩ and α′′ is bound by µα′′. In particular, µα′′ does not bind any name in c. Presentation (1.7), which we recall below and is the one used in this paper:

(µα.c)L u →dM (µα′. c⟨α \ α′ u⟩)L        c⟨α \ α′ s⟩ →R c{{α \ α′ s}}

has an additional benefit that we shall not get to exploit here, but that may be done in a continuation of this work (which originally sparked it, in fact). It has to do with single replacement, where one would like to perform replacement of one occurrence of [α] t at a time. With a binary presentation the intermediate expression does not make any sense: the first argument named α in t1 has already been replaced while the second one is still waiting for an argument, a fact not reflected in the syntax. It is exactly in this framework that the ternary notion of replacement inherited from Andou [And03] makes sense.

4.3. Substitution, Renaming and Replacement in ΛM.
The application of the implicit substitution {x \ u} to a ΛM-object o is defined as the natural extension of that of the λµ-calculus (Sec. 3.1). We now detail the application of implicit replacements and renamings, which are more subtle. The application of the implicit replacement {{α \ α′ s}} to a ΛM-object is defined as a capture-avoiding operation, homomorphic on all constructors except commands and explicit replacements (recall that by α-conversion α ∉ fn(s) and α ≠ α′). Most of the cases in the definition are straightforward; we only comment on the interesting ones. A sub-command of the form [α] t is turned into [α′] (t{{α \ α′ s}} :: s). When the implicit replacement affects an explicit replacement, i.e. in the case (c⟨γ \ α s′⟩){{α \ α′ s}}, the explicit replacement is blocking the implicit replacement operation over γ. This means that γ and α denote the same command, but the arguments of α must not be passed to γ yet. This is why the resulting explicit replacement accumulates all these arguments in a stack, which explains the need for this data structure inside explicit replacements. The application of the implicit renaming {α \ β} to a ΛM-object replaces every free occurrence of the name α by β, and is defined homomorphically otherwise. The three operations {x \ u}, {{α \ α′ s}} and {α \ β} are extended to contexts as expected.

4.4. Reduction Semantics of ΛM. The reduction semantics for ΛM will be presented in terms of reduction rules. With an eye placed on the upcoming notion of structural equivalence ≃, we will classify these rules as performing mere reshuffling of symbols or performing more elaborate work. This classification is motivated by the multiplicative and exponential nature of the different redexes of ΛM-terms, as discussed in the introduction. The first two reduction rules for ΛM arise from the simple decomposition of β-reduction:

(λx.t)L u →dB t[x \ u]L        t[x \ u] →S t{x \ u}

where →dB is constrained by the condition fc(u, L).
These have already been studied in the literature, where, as discussed in the introduction, they suffice to state and prove a strong bisimulation result for the intuitionistic case (cf. Thm. 1.2). As mentioned there, the first is a simple reshuffling of symbols, which we thus consider to be plain computation, but the second is not: it may substitute u deep within a term, or possibly even erase u, involving the use of exponential cuts in its proof-net semantics, and is hence considered meaningful computation. Note that dB operates at a distance [AK12a], in the sense that an abstraction and its argument may be separated by an arbitrary number of explicit substitutions. The next reduction rules we consider arise from a more subtle decomposition of µ-reduction. The third reduction rule for ΛM is:

(µα.c)L u →dM (µα′. c⟨α \ α′ u⟩)L

subject to the constraint fc(u, L), where α′ is a fresh name (i.e. α′ ≠ α, α′ ∉ fn(c) and α′ ∉ fn(u)). This rule is similar in nature to dB in the sense that it fires a µ-redex and may be seen to reshuffle symbols. In particular, no replacement actually takes place, since the rule merely introduces an explicit replacement. Rule dM is therefore also considered to perform plain computation. Note that dM operates at a distance too. With the introduction of dM, our notation for explicit replacement can now be justified. Indeed, following Parigot [Par92], one might be tempted to rephrase the reduct of dM with a binary constructor, writing (µα. c⟨α \ u⟩)L on the right-hand side of rule dM. This would be incorrect, since all free occurrences of α in c would then be bound by the α in c⟨α \ u⟩, which renders the role of "µα" meaningless.
We have not yet finished introducing the reduction rules for ΛM. All that is missing is a means to execute the explicit replacement introduced by dM. The natural candidate for executing replacement would be to have just one reduction rule, namely:

c⟨α \ α′ s⟩ →R c{{α \ α′ s}}

However, this is too coarse-grained to be able to obtain our strong bisimulation result (cf. Sec. 5), and therefore explicit replacement will be implemented not by one, but rather by multiple reduction rules. These resulting reduction rules can be easily categorised into plain and meaningful behavior according to the following criterion: first, whether the bound name α in the command c occurring in the left-hand side of rule R occurs linearly; and, second, whether this single occurrence of α in c may be erased or duplicated by reduction or not. We will next present these rules gradually.
The first of these rules handles the case where α does not occur linearly in c and results in the fourth reduction rule for ΛM: We still have to address the case where there is a unique occurrence of α in c. Replacing that unique occurrence is not necessarily an act of mere reshuffling; it depends on where the occurrence of α appears in c. If α appears inside the argument of an application, an explicit substitution or an explicit replacement, then this single occurrence of α could later be erased or duplicated. One might say such a replacement performs hereditarily meaningful work. Let us make this more precise. If α occurs exactly once in c, then the left-hand side c α \ α ′ s of the reduction rule R must have one of the following forms: where the unique free occurrence of α in c has been highlighted in bold and α does not occur in any of C, t, c ′ , s ′ . These determine the following two instances of the reduction rule R: The first rule applies the explicit replacement upon finding the (only) occurrence of the name α, while the second composes two explicit replacements by concatenating their respective stacks. As mentioned above, we would like to further single out, in each of these reduction rules, the case where α appears inside the argument of an application, an explicit substitution or an explicit replacement. The instances where it does are called non-linear, and those where it does not are called linear. This yields four (disjoint) rules. From now on, we call N and Nnl the linear and non-linear instances of rule Name respectively; similarly, we have C and Cnl for Comp: These rules can be formulated with the notion of linear contexts, generated by the following grammars: where each category LXY denotes a linear context that takes an object of sort Y and returns one of sort X. For example, LTC is a linear context taking a command and producing a term. Indeed, notice that the grammar does not allow the hole to occur inside an argument (of an application or an ES).
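The linear/non-linear criterion can be sketched on a toy command syntax (our own tagged-tuple encoding, covering only a fragment and no explicit operators): collect the free occurrences of a name α, recording whether some enclosing node places each in argument position; the linear rules apply exactly when there is a single occurrence and it is not in argument position.

```python
# Toy encoding of commands/terms (illustration only):
#   ("var", x)        variable
#   ("name", a, t)    command [a]t
#   ("mu", a, c)      term mu a. c  (binds a)
#   ("app", t, u)     application (u sits in argument position)

def occs(node, alpha, in_arg=False):
    """Free occurrences of name alpha, each flagged 'inside an argument?'."""
    tag = node[0]
    if tag == "var":
        return []
    if tag == "name":
        _, beta, t = node
        return ([in_arg] if beta == alpha else []) + occs(t, alpha, in_arg)
    if tag == "mu":                          # mu beta binds beta
        _, beta, c = node
        return [] if beta == alpha else occs(c, alpha, in_arg)
    if tag == "app":                         # argument position may be
        _, t, u = node                       # erased or duplicated later
        return occs(t, alpha, in_arg) + occs(u, alpha, True)
    raise ValueError(tag)

def classify(c, alpha):
    """'linear' iff alpha occurs exactly once, outside argument position."""
    return "linear" if occs(c, alpha) == [False] else "non-linear"
```

For instance, `classify(("name", "a", ("var", "x")), "a")` is linear, whereas an occurrence of the name under the argument of an application is non-linear.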
With this definition in place we can, for example, formulate the decomposition of the reduction rule Name as follows: The following diagram summarises which instances of R are considered plain and which are considered meaningful: Summarising our analysis, the full set of reduction rules for ΛM is presented next.
Definition 4.1. Reduction in the ΛM-calculus is given by the following reduction rules, closed under arbitrary contexts: We can then state that the ΛM-calculus refines the λµ-calculus by decomposing β and µ into more atomic steps.
As in the case of the λµ-calculus, the ΛM-calculus is confluent. Proof. The proof uses the interpretation method [CHL96], projecting the ΛM-calculus onto the λµ-calculus. Details are in Appendix A.

A Strong Bisimulation for ΛM
We now introduce our notion of structural equivalence for ΛM , written ≃, breaking down the presentation into two key tools on which we have based our development: plain forms and linear contexts.
Plain Forms. As discussed in Sec. 1, the initial intuition in defining a strong bisimulation for ΛM arises from the intuitionistic case: Regnier's σ-equivalence is not a strong bisimulation, but decomposing the β-rule and taking the dB-normal form of the left- and right-hand sides of the equations in Fig. 1 results in the σ-equivalence relation on λ-terms with explicit substitutions. This relation turns out to be a strong bisimulation with respect to the notion of meaningful computation (the relation → S in the case of λ-calculus).
One can identify dB as performing innocuous or plain computation, a fact also supported by the translation of this step as a multiplicative cut in polarized proof-nets [Lau02, Lau03]. Similarly, one can identify S as performing non-trivial or meaningful work. In the classical case, this leads us to introduce two restrictions of reduction in ΛM (cf. Def. 4.1), one called plain and the other meaningful.
Definition 5.1. The plain reduction relation → P is defined as the closure by contexts of the following plain rules: The set of plain forms of the ΛM-calculus is given by NF_P ≜ NF_{dB,dM,N,C}. Moreover, the relation → P is terminating and confluent.
Theorem 5.2. The relation → P is terminating.
Proof. We prove this result by resorting to a polynomial interpretation. Details in Appendix B.
Theorem 5.3. The relation → P has the diamond property and hence it is confluent.
Proof. We prove that the relation → P has the diamond property by inspecting all possible cases. Details can be found in Appendix B.
From now on, we will refer to the (unique) plain form of an object o as P(o) ≜ nf_P(o).
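Operationally, uniqueness of P(o) means that any single-step strategy computes it: since → P is terminating (Thm. 5.2) and confluent (Thm. 5.3), blindly iterating any step function reaches the same normal form. A generic sketch (the step function below is a toy stand-in, not an implementation of the paper's rules):

```python
def plain_form(o, step):
    """Iterate a single-step reduction function until no rule applies.
    Termination and confluence of the underlying relation guarantee that
    this yields the unique normal form, whatever strategy `step` implements."""
    while (nxt := step(o)) is not None:
        o = nxt
    return o

# Toy stand-in: an integer counting remaining plain redexes; each step
# contracts one of them, and None signals a normal form.
toy_step = lambda n: n - 1 if n > 0 else None
```

For example, `plain_form(5, toy_step)` contracts all five toy redexes and stops at the normal form.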
Definition 5.4. The meaningful replacement reduction relation → R • is defined as the closure by contexts of the non-linear rules Rnl, Nnl, and Cnl, i.e.
The meaningful reduction relation for the ΛM-calculus on plain forms is given by: We occasionally use S and R • to make explicit which rule is used in a meaningful step.
In the classical case, a first attempt at obtaining a strong bisimulation is to consider the rules that result from taking the P-normal form on each side of those of Laurent's σ-equivalence relation (cf. Fig. 3). The resulting relation ≃σ on ΛM-objects is depicted in Fig. 4. This equational theory would be a natural candidate for our strong bisimulation, but unfortunately it is not one. As discussed in the introduction, the rule ≃σ 8 breaks strong bisimulation, so we are forced to remove it. But we cannot remove it completely, as it is required for firing linear redexes [Lau03, Prop. 40]. It is also required for swapping explicit renamings in order to close strong bisimulation diagrams (cf. Thm. 5.9); this aspect of ρ is incorporated as the rule ≃ exren in our upcoming equivalence ≃.
Linear Contexts. Linear contexts turned out to be very useful when decomposing the rewriting rule R (cf. Sec. 4.4). Here they are used once again, this time to reduce the number of rules needed for the equivalence relation. Note that linear contexts support not only the commutation of explicit substitutions but also that of explicit replacements. Indeed, the equations ≃σ 1 , ≃σ 2 , ≃σ 3 and ≃σ 9 in Fig. 4 are generalised into a single equation reflecting the fact that an explicit substitution commutes with linear contexts. Something similar can be stated for rules ≃σ 4 and ≃σ 5 , between linear contexts and explicit replacements. Moreover, linear contexts can be skipped by any explicit operator (substitution or replacement) as long as they are independent, i.e. no undesired capture of free variables/names takes place.
Definition 5.5 (Structural Equivalence over Plain Forms). The structural equivalence relation ≃ is the least congruence relation over plain forms of the ΛM-calculus generated by the rules in Fig. 5. Proof. Each case is proved by induction on L or R respectively, using some auxiliary results. Details are in Appendix C.
Note that ≃ is not a congruence on arbitrary terms but only on plain forms. For example, µα.[α]x and x are both in plain form and moreover µα.[α]x ≃ x. However, (µα.
This case is particularly interesting since rules ≃ exrepl and ≃ exren play a key role in it: As in the previous case, we can conclude thanks to rules ≃ exrepl and ≃ exren : Hence, commutation rules ≃σ 1 , ≃σ 2 , ≃σ 3 and ≃σ 9 from Fig. 4 are replaced by ≃ exsubs , while ≃ exrepl and ≃ exren replace both ≃σ 4 and ≃σ 5 , and ≃σ 8 is discarded. Only rules ≃σ 6 and ≃σ 7 remain unaltered, here called ≃ ppop (for pop/pop) and ≃ θ respectively (see [Lau03] for the origin of these names). The following table summarises the correspondence between the different rules:
where u, u ′ are terms and s, s ′ are stacks. Then, Proof. Each case is proved by induction on o ≃ o ′ . Details are in Appendix D.
We are now able to state the promised result, namely that ≃ is a strong bisimulation with respect to the meaningful computation relation. Proof. The proof is by induction on o ≃ p and uses Lem. 5.6, Lem. 5.7 and Lem. 5.8. All details are in Appendix D.
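The content of the theorem — every step from one side is matched by exactly one step from the other, landing in related objects — can be phrased as a generic check on a finite transition system. A naive sketch, where the relation, the states and the step function are all our own toy encoding:

```python
def is_strong_bisimulation(rel, step):
    """rel: set of related pairs (o, p); step: dict mapping a state to its
    set of one-step successors.  Checks the strong bisimulation condition:
    each move of o is matched by a single move of p into rel, and symmetrically."""
    for o, p in rel:
        for o2 in step.get(o, ()):
            if not any((o2, p2) in rel for p2 in step.get(p, ())):
                return False
        for p2 in step.get(p, ()):
            if not any((o2, p2) in rel for o2 in step.get(o, ())):
                return False
    return True
```

For instance, relating states a and b with a → a1 and b → b1 passes the check only if (a1, b1) is also in the relation; this is exactly the "same reduction semantics" reading of the theorem.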
Example 5.10. We illustrate the previous theorem with the following example.
Some axioms of the new relation ≃ τ can be generalised to several arguments. For this, we use the meta-notation u :: s introduced in Sec. 4, here denoting the term of T λµ resulting from the application of u to a stack s of terms in T λµ, i.e. if s = t 0 · . . . · t n with t 0 , . . . , t n ∈ T λµ, then u :: s ≜ u t 0 . . . t n denotes a term in T λµ.
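The meta-notation u :: s behaves like a left fold of application over the stack, and it satisfies the law (u :: s ′ ) :: s = u :: (s ′ · s) used in the critical-pair analysis of Appendix B. A sketch with stacks as Python lists and applications as tagged tuples (the encoding is ours):

```python
def app_stack(u, s):
    """u :: s  =  u t0 ... tn, folded as left-nested applications."""
    for t in s:
        u = ("app", u, t)   # apply u to the next stack element
    return u
```

Stack concatenation s ′ · s is then list concatenation, and the law (u :: s ′ ) :: s = u :: (s ′ · s) is simply its associativity.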
Lemma 6.2. Let t, u ∈ T λµ and c ∈ C λµ, and let s be a stack of terms in T λµ. Then, To obtain the desired correspondence between ≃ on ΛM-objects and ≃ τ on λµ-objects, it is necessary to relate the sets O ΛM and O λµ. We do so by means of an expansion function e(_) that eliminates the explicit operators of an object by dB- and dM-expansions; its defining clauses include e(λx.t) = λx.e(t) and e(µα.c) = µα.e(c). Note that expansion need not return the original object literally; however, it yields an equivalent object thanks to rule ≃ τ 7 . Some basic properties of the expansion function are stated below.
Lemma 6.3. Let t, u ∈ T λµ and o ∈ O ΛM. Let L be a substitution context. Then, Proof. The first and second points are by induction on L. The third point is by straightforward induction on o. The fourth point is by induction on L, using the third point. The last one is by induction on L, using the fact that ≃ τ is a congruence.
The expansion function allows us to ≃ τ -equate ΛM-objects that are related by the reduction → P (cf. Def. 5.1). We start with the more subtle cases → N and → C .
Lemma 6.4. Let t ∈ T λµ and c ∈ C λµ . Let s, s ′ be stacks and LCC be a CC linear context. Then, Proof. The proof is in Appendix E.
Proof. We only show the base cases: The result follows from Lem. 6.4(1).
We can then conclude that plain forms do not change under expansion. We now show that ≃ τ -equivalent λµ-objects project into ≃ by means of the plain form. Proof. The proof is in Appendix E.
For the converse we use the expansion function, i.e. ≃-equivalent ΛM -objects project into ≃ τ by means of the expansion function. Proof. The proof is in Appendix E.
The properties above allow us to conclude with the following result. Even if this last theorem relates the new ≃ τ -equivalence to the strong bisimulation ≃ presented in Section 5, the resulting property also explains the relationship between Laurent's σ-equivalence and ≃. Indeed, starting from the fact that ρ-equivalence breaks strong bisimulation (cf. the example in the introduction), ρ-equivalence is restricted (through our adoption of ≃ τ 8 , ≃ τ 9 and ≃ τ 10 in lieu of ρ) to its non-erasing and non-duplicating role of swapping names and µ-binders in the new relation ≃ τ . In this way, we keep only the strictly necessary renaming operation of Laurent's original σ-equivalence, which suffices to materialise the correspondence with our strong bisimulation.

Conclusion
This paper is about σ-equivalence in classical logic and the negligible effect it has on the operational behavior of the terms it relates. It refines the λµ-calculus with explicit operators for substitution and replacement, in particular splitting each of the rules β and µ of λµ into multiplicative and exponential fragments, thus resulting in the introduction of a new calculus called ΛM. This new presentation of λµ allows us to reformulate σ-equivalence on λµ-terms as a strong bisimulation relation ≃ on ΛM-terms. The main obstacle to extracting a bisimulation on ΛM from Laurent's original σ-equivalence on λµ-terms is axiom ≃ ρ , which causes σ-equivalence to fail to be a strong bisimulation. We learn that we cannot remove ≃ ρ entirely, since it is needed to close several commutation diagrams in the proof of strong bisimulation. However, a restriction of ≃ ρ turns out to suffice.
In [KV19], the λµ-calculus is refined to a calculus λµs with explicit operators, together with a linear substitution/replacement operational semantics at a distance. In contrast to ΛM , λµs does not support composition of explicit replacements. In particular, explicit replacements in λµs are defined on terms, and not on stacks, thus the calculus is not able to capture an appropriate notion of bisimulation such as the one presented in this paper.
Other classical term calculi exist, e.g. [CH00, Aud94, Pol04, vBV14], but none of these formalisms decomposes term reduction by means of a fine distinction between multiplicative and exponential rules. Thus, the main ingredients needed to build a strong bisimulation are simply not available. Of particular interest would be to obtain a strong bisimulation in the setting of λ̄µµ̃ [CH00], a calculus inspired by the sequent calculus and constructed as a perfectly symmetric formalism to deal uniformly with CBN and CBV.
Explicit substitutions are a means of modeling sharing in lambda calculi and hence are well suited for capturing call-by-need [ABM14]. We believe ΛM may prove useful in devising a notion of call-by-need for classical computation. Our notation for explicit replacement, and the notion of single replacement it supports (cf. Sec. 4.2), would play a crucial role in formulating such a calculus.
A further related reference is [HL10], where PPNs are used to interpret processes from the π-calculus. A precise correspondence is established between PPNs and a typed version of the asynchronous π-calculus. Moreover, they show that Laurent's ≃ σ corresponds exactly to structural equivalence of π-calculus processes (Prop. 1 in op. cit.). In [LR03], Laurent and Regnier show that there is a precise correspondence between CPS translations from classical calculi (such as λµ) into intuitionistic ones on the one hand, and translations between LLP and LL on the other.
It would be interesting to analyse other rewriting properties of our term language, such as preservation of λµ-strong normalisation for the reduction relations of ΛM, or confluence of the meaningful reduction relation. Moreover, following the computational interpretation of deep inference provided by the intuitionistic atomic lambda-calculus [GHP13], it would be interesting to investigate a classical extension and its corresponding notion of strong bisimulation. It is also natural to wonder what an ideal syntax for classical logic would be, one able to capture strong bisimulation by reducing the syntactic axioms to a small and simple set of equations.
Finally, our notion of ≃-equivalence could facilitate proofs of correctness between abstract machines and λµ (as [ABM14] does for the λ-calculus) and help establish whether abstract machines for λµ are "reasonable" [ABM14].

Appendix A. Confluence of the ΛM-calculus

To prove confluence of the ΛM-calculus we use the interpretation method [CHL96], where ΛM is projected into the λµ-calculus.
Definition A.1. The projection _ ↓ from ΛM-objects to λµ-objects is defined as Proof. Both statements are by induction on o. Proof. Items (1) Last, to apply the interpretation method we need to relate the relations → ΛM and → λµ by means of the projection function _ ↓ . Proof.

Appendix B. Strong Normalisation of Plain Computation
To show that plain computation is strongly normalising we define a measure over objects of the ΛM-calculus. It is worth noticing that the standard size of an object (i.e. counting all its constructors) is not enough, since it does not strictly decrease under computation, as the following remarks show: (1) Rule dM discards an application while introducing a new explicit replacement, thus preserving the number of constructors in the object. (2) Rule N discards a linear explicit replacement carrying a stack of n elements, replacing it with n applications. The number of stack constructors in a stack of n elements is n − 1, which, together with the discarded explicit replacement, compensates for the n introduced applications. (3) Rule C discards a linear explicit replacement by combining it with another one, appending their respective stacks. This introduces a new stack constructor, preserving the total number of constructors in the object.
The first remark suggests that the application constructor should have more weight than the replacement constructor to guarantee normalisation by means of a polynomial interpretation. However, the second remark suggests exactly the opposite. On another note, the third remark requires explicit replacements to have more weight than stacks.
Under these considerations we define the following measure over objects of the ΛM-calculus which turns out to be decreasing w.r.t. reduction → P .
Proof. By induction on O.
Theorem 5.2. The relation → P is terminating.
Proof. We prove that o → P p implies o > p by induction on the relation → P . We first analyse all the base cases: with a > 0 and b ≥ 0, and conclude. Proof. We first remark that the rules dB, dM, N and C do not duplicate any subterm. Thus, any trivial one-step divergence between these rules can be closed in one step as well. The only cases left to consider are then the critical pairs between them. There are only two such cases: , fc(s, LCC 2 ) given by rule C, and the conditions δ / ∈ t, δ / ∈ LCC 1 , fc(α, LCC 1 ) and fc(s ′ , LCC 1 ) given by rule N. We conclude since (t :: s ′ ) :: s = t :: (s ′ · s), thus obtaining the diagram: , fc(s, LCC 2 ) due to the outermost application of the rule, and the conditions δ / ∈ c, δ / ∈ LCC 1 , δ / ∈ s ′ , fc(α, LCC 1 ) and fc(s ′ , LCC 1 ) due to the innermost one. We conclude since s ′′ · (s ′ · s) = (s ′′ · s ′ ) · s, thus obtaining the following diagram: To prove Lem. 5.6 we introduce two auxiliary results about contexts LTT and LCC.
We address the first item; the second one is similar. The possible overlap between the P-step and L LTT t can be broken down as follows: • The step is entirely within t, i.e. t → P t ′ . Then it suffices to take L ′ = L and LTT ′ = LTT. • The step is entirely within L, i.e. L → P L ′ . Then we take t ′ = t and LTT ′ = LTT.
• The step is entirely within LTT, i.e. LTT → P LTT ′ . Then we take t ′ = t and L ′ = L.
• The step overlaps with LTT. There are four cases according to the reduction rule applied: (1) dB. There are two further cases.
(a) It overlaps with t. Then, t = L 1 λx.v , LTT = LTT 2 L 2 u and the LHS of the rule dB is L 2 L 1 λx.v u. We conclude by setting (2) dM. There are two further cases.
We recall the definition of Replacement/Renaming Contexts: (2) Suppose e = LCC R c with bn(R) / ∈ LCC and fc(R, LCC). If e → P e ′′ , then there exists Proof. We focus on the first item, the second being similar. The proof proceeds by analysing all the overlapping cases between the LHS of the P-step and R LCC c : • The step is completely within c, i.e. c → P c ′ . Then, it suffices to set R ′ = R and LCC ′ = LCC and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty.
• The step is completely within R, i.e. R → P R ′ . Then, it suffices to set c ′ = c and LCC ′ = LCC and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty. This case relies on the fact that all R contexts are also LCC contexts and that the latter are present in the patterns of the LHSs of rewrite rules defining P. • The step is completely within LCC, i.e. LCC → P LCC ′ . Then it suffices to set c ′ = c and R ′ = R and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty. • The step overlaps with LCC. There are three further cases depending on the reduction step applied and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: (1) dB. Then, and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: LTC :: s and the reduction sequence e ′′ ։ dM e ′ is empty: (4) C. Then there are two possibilities.
(a) It overlaps with c. Then we have c = LCC 1 c 1 β \ α s ′ , LCC = LCC 3 LCC 2 α \ α ′ s and the rule C applies to LCC 2 LCC 1 c 1 β \ α s ′ α \ α ′ s . We conclude by setting c ′ = LCC 1 c 1 β \ α ′ s ′ · s , R ′ = R and LCC ′ = LCC 3 LCC 2 and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: We conclude with c ′ = c, R ′ = R and LCC ′ = LCC 3 LCC 2 LCC 1 β \ α ′ s ′ · s and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: • The step overlaps with R. Since the reduction step cannot be dB nor dM, there are two further cases to consider.
(1) N. Note that it cannot overlap LCC too. Indeed, if it did then we have the context However, this is not allowed by the condition bn(R) / ∈ LCC. There are two cases to consider. (a) The step overlaps with c.
(2) C. Note that it cannot overlap LCC too. Indeed, if it did then we have the com- However, this is not allowed by the condition bn(R) / ∈ LCC. There are two cases to consider. (a) The step overlaps with c. Then d = R 1 R 2 LCC LCC 1 LCC 2 c ′ β \ α s ′ α \ α ′ s and c = LCC 1 LCC 2 c ′ β \ α s ′ and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: and LCC ′ = LCC and the reduction sequences d ′′ ։ dM d ′ and e ′′ ։ dM e ′ are empty: • There are no further cases.

Proof.
(1) Let L LTT t ∈ T ΛM with bv(L) / ∈ LTT and fc(L, LTT). By induction on the length of the longest P reduction sequence from L LTT t to its normal form, resorting to Lem. C.1. Suppose d 0 = LTT L t ։ P P(LTT L t ) = d n is a longest reduction sequence consisting of n steps. This is depicted on the left, in the figure below. Application of Lem. C.1 will produce the subdiagram at the top of the figure. The P reduction sequence d 1 ։ dM d ′ 1 ։ P d n exists by confluence of P and, moreover, it has at most n − 1 steps since the reduction sequence from d 0 was assumed to be a longest reduction. This allows us to apply the i.h. to the reduction sequence d ′ 1 ։ P d n (as indicated in the figure) to conclude: (2) Similar to the previous item but using Lem. C.2.
Proof. By case analysis on LXC. We only illustrate one of the base cases, namely when LXC = , the others being similar. In the case that LXC = , we must have o = c. There are three possible rules for commands: ≃ exrepl , ≃ exren , and ≃ ppop . The first two follow by using Lem. 5.6 (2); the last one is direct. We provide details on the ≃ ppop case.
(1) By induction on o ≃ o ′ which, by definition, implies verifying the cases for reflexivity, transitivity, symmetry and congruence (i.e. closure by contexts). All are straightforward except the last.
where the explicit substitutions result from contracting the dB-redexes in u and the explicit replacements result from contracting the dM-redexes in s ′ followed by the resulting C-redexes. If s = u, then there are no dM-redexes to contract, and there are only explicit substitutions. Similarly, Then by Lem. 5.6, on the one hand we have: , and on the other hand we have: ; resorting to item (1) of that lemma to correctly place the explicit substitutions and item (2). We then conclude by applying ≃ exren : First note that (2) By induction on o. All cases follow from the i.h. and/or Lem. 5.7.
(2) S-redex overlaps LTT. Then we have the context We conclude by Lem. 5.6 (1): We conclude by Lem. 5.6 (1): [α] t and p = t with α / ∈ t. This case is immediate since all reduction steps must be in t.
• Q = . Then, o = q ≃ * q ′ = p. Moreover, in this case o, p ∈ C ΛM . We only detail the cases where there is an overlap between the equivalence and the reduction rules, the others being immediate. There are three rules applicable to commands: -≃ exrepl . Then, o = LCC c α \ α ′ s and p = LCC c α \ α ′ s with α / ∈ LCC, fc(α ′ , LCC) and fc(s, LCC). There are three further possible cases: (1) R • -redex at the root.
(1) We address item (1) first, by induction on the size of LCC. • LCC = [δ] LTC. We proceed by analysing the shape of LTC.