A Rewriting Logic Approach to Specification, Proof-search, and Meta-proofs in Sequent Systems

This paper develops an algorithmic approach for proving inductive properties of propositional sequent systems such as admissibility, invertibility, cut-elimination, and identity expansion. Although undecidable in general, these structural properties are crucial in proof theory because they can reduce the proof-search effort and further be used as scaffolding for obtaining other meta-results such as consistency. The algorithms -- which take advantage of the rewriting logic meta-logical framework, and use rewrite- and narrowing-based reasoning -- are explained in detail and illustrated with examples throughout the paper. They have been fully mechanized in the L-Framework, thus offering both a formal specification language and off-the-shelf mechanization of the proof-search algorithms, coming together with semi-decision procedures for proving theorems and meta-theorems of the object system. As illustrated with case studies in the paper, the L-Framework achieves a great degree of automation when used on several propositional sequent systems, including single-conclusion and multi-conclusion intuitionistic logic, classical logic, classical linear logic and its dyadic system, intuitionistic linear logic, and normal modal logics.

⊤ denotes the empty sequent. Therefore, as a logical framework, rewriting logic directly serves the purpose of providing a proof-search algorithm at the object level of S . Since rewriting logic is reflective, a universal rewrite theory U is further used to prove meta-logical properties (e.g., cut-freeness) of the system S . More precisely, such proofs are specified as reachability goals of the form U ⊢ ⌜S⌝ ; ⌜Γ⌝ →* ⌜S⌝ ; ⊤, where Γ is a set of sequents representing the proof obligations to be discharged, and ⌜S⌝ and ⌜Γ⌝ are the meta-representations of S and Γ in the syntax of the universal theory U , respectively. In this way, starting with the specification of the propositional sequent system S in rewriting logic, the framework presented in this paper enables both the proof of theorems and of inductive meta-theorems of S .
The framework comprises a generic rewrite theory offering sorts, operators, and standard procedures for proof-search at the object- and meta-levels. The sorts are used to represent different elements of the syntactic structure of a sequent system such as formulas, sets of formulas, sequents, and collections of sequents. The inference rules of a sequent system are specified as reversed rewrite rules, so that an inference step of the mechanized system is carried out by matching a sequent to the conclusion of an inference rule, and replacing the latter by the corresponding substitution instance of the rule's premises. Conceptually, a proof at the object level is found when all proof obligations are conclusions of inference rules without premises (i.e., axioms). Once a sequent system S is specified, proofs of its cut-freeness, admissibility, invertibility, and identity expansion properties can be searched for. The process of obtaining proofs for each one of these properties follows a similar pattern. Namely, the rules of S are used by pairwise comparison and narrowing to generate the inductive hypotheses H and the proof obligations Γ . Therefore, if for each pair (H ,Γ ) the (meta-)reachability goal U ⊢ ⌜S ∪ H⌝ ; ⌜Γ⌝ →* ⌜S ∪ H⌝ ; ⊤ (with ⌜·⌝ denoting meta-representation) can be answered positively, then the corresponding property of the sequent system S holds. The universal theory U contains the rewrite- and narrowing-based heuristics that operate over S ∪ H and Γ . Much of the effort in obtaining the proposed framework was devoted to designing such meta-level heuristics and fully implementing them in Maude [8], a high-performance reflective language and system supporting rewriting logic.
The case studies presented in this paper comprise several propositional sequent systems, encompassing different proof-theoretical aspects. The chosen systems include: single conclusion (G3ip) and multi-conclusion intuitionistic logic (mLJ), classical logic (G3cp), classical linear logic (LL) and its dyadic system (DyLL), intuitionistic linear logic (ILL), and normal modal logics (K and S4). Beyond advocating for the use of rewriting logic as a meta-logical framework, the novel algorithms presented here are able to automatically discharge many proof obligations and ultimately obtain the expected results.
This paper is an extended version of the work [30], bringing not only new results and analyses, but also new uses of the L-Framework tool. In particular:

- Properties and the spectrum of logics: the procedures have been updated and extended to automatically check invertibility of rules in a larger class of logical systems, including modal logics and variants of linear logic. Moreover, the current analyses prove stronger properties: invertibility lemmas and admissibility of structural rules are shown to be height-preserving (a fact needed in the proof of cut-elimination). For that, the definitions and necessary conditions in Section 5 were refined.
- New reasoning techniques: cut-elimination is probably the most important property of sequent systems. This paper shows how to use the proposed framework to automatically discharge most of the cases in proofs of cut-elimination. In some cases, the proposed algorithms are able to complete entire cut-elimination proofs.
The cases of failure are also interesting, since they tell much about the reasons for such failure. Some are fixable by proving and adding invertibility and admissibility lemmas; others shed light on the source of the failure. Finally, this work also addresses the dual property of identity expansion. Together with cut-elimination, this property is at the core of designing harmonious systems [12].
- Pretty printing and output: the previous implementation has been updated to generate proof terms that can be used, e.g., to produce LaTeX files with the proof trees in the meta-proofs. This allows for generating documents with complete and detailed proofs of several results in the literature, as well as for identifying dependencies among the different theorems.
Outline. The rest of the paper is organized as follows. Section 2 introduces the structural properties of sequent systems that are considered in this work and Section 3 presents order-sorted rewriting logic and its main features as a logical framework. Section 4 presents the machinery used for specifying sequent systems as rewrite theories. Then, Section 5 establishes how to prove the structural properties based on a rewriting approach. Section 6 addresses cut-elimination and identity expansion. The design principles behind the L-Framework are described in Section 7. Section 8 presents different sequent systems and properties that can be proved with the approach. Finally, Section 9 concludes the paper and presents some future research directions.

Structural Properties of Sequent-Based Proof Systems
This section presents and illustrates with examples the properties of admissibility and invertibility of rules in sequent systems [27,35]. Additional notation and standard definitions are established to make the text self-contained.
Definition 1 (Sequent) Let L be a formal language consisting of well-formed formulas. A sequent is an expression of the form Γ ⊢ ∆ where Γ (the antecedent) and ∆ (the succedent) are finite multisets of formulas in L , and ⊢ is the consequence symbol. If the succedent of a sequent contains at most one formula, the sequent is called single-conclusion; otherwise, it is called multiple-conclusion.
Definition 2 (Sequent System) A sequent system S is a set of inference rules. Each inference rule is a rule schema, and a rule application is an instance of such a schema. An inference rule has the form

    S_1  · · ·  S_n
    ─────────────── r
          S

where the sequent S is the conclusion inferred from the premise sequents S_1, . . . , S_n in the rule r. If the set of premises is empty, then r is an axiom. In a rule introducing a connective, the formula with that connective in the conclusion sequent is the principal formula. When sequents are restricted to having empty antecedents (hence being of the shape ⊢ ∆), the system is called one-sided; otherwise, it is called two-sided.

Fig. 1: System G3ip for propositional intuitionistic logic. In the I axiom, p is atomic.
As an example, Fig. 1 presents the two-sided single-conclusion propositional intuitionistic sequent system G3ip [35], with formulas built from the grammar

    F, G ::= p | ⊥ | F ∧ G | F ∨ G | F ⊃ G

where p ranges over atomic propositions. In this system, for instance, the formula F ∨ G in the conclusion sequent of the inference rule ∨ L is the principal formula.

Definition 3 (Derivation)
A derivation in a sequent system S (called an S -derivation) is a finite rooted tree with nodes labeled by sequents, axioms at the top nodes, and where each node is connected with its (immediate) successor nodes (if any) according to the inference rules of S . A sequent S is derivable in the sequent system S , notation S ◮ S, iff there is a derivation of S in S . The system S is usually omitted when it can be inferred from the context.
It is important to distinguish the two different notions associated to the symbols ⊢ and ◮, namely: the former is used to build sequents and the latter denotes derivability in a sequent system.

Definition 4 (Height of derivation)
The height of a derivation is the greatest number of successive applications of rules in it, where an axiom has height 1. The expression S ◮ n Γ ⊢ ∆ denotes that the sequent Γ ⊢ ∆ is derivable in S with a derivation of height at most n. A property is said to be height-preserving if such an n is an invariant. The annotated sequent Γ ⊢ n ∆ is a shorthand for S ◮ n Γ ⊢ ∆ when S is clear from the context. Moreover, when Γ and ∆ are unimportant or irrelevant, this notation is further simplified to n : S.
In what follows, annotated sequents are freely used in inference rules. For instance, if s(·) represents the successor function on natural numbers, then

    n : S_1  · · ·  n : S_k
    ─────────────────────── r
          s(n) : S

represents the annotated rule r stating that if S ◮ n S_i for each i = 1, . . . , k, then S ◮ s(n) S. It may be useful to derive new rules from the ones initially proposed in a system, since this may ease the reasoning when proving properties. Such derived new rules are said to be admissible in the system. Invertibility, on the other hand, is the admissibility of "upside-down" rules, where the premises of a rule are derived from its conclusion. Invertibility is one of the most important properties in proof theory, since it is at the core of proofs of cut-elimination [13], as well as the basis for tailoring focused proof systems [1,21].
Definition 5 (Admissibility and Invertibility) Let S be a sequent system. An inference rule

    S_1  · · ·  S_k
    ───────────────
          S

is called:
i. admissible in S if S is derivable in S whenever S_1, . . . , S_k are derivable in S .
ii. invertible in S if, for each i = 1, . . . , k, the rule with premise S and conclusion S_i is admissible in S .
The admissibility of structural rules is often required in the proof of other properties. A structural rule does not introduce logical connectives, but instead changes the structure of the sequent. Since sequents are built from multisets, such changes are related to the cardinality of a formula or its presence/absence in a context.
In the intuitionistic setting, the structural rules of weakening (W) and contraction (C) are height-preserving admissible in G3ip. A proof of the admissibility of weakening can be built by induction on the height of derivations (considering all possible last rule applications), and it is often independent of any other results. For example, suppose that G3ip ◮ n Γ ⊢ C and consider, e.g., the case where C = A ⊃ B and the last rule applied in the proof of Γ ⊢ A ⊃ B is ⊃ R, with premise Γ , A ⊢ i B for some i < n. By the inductive hypothesis, the sequent Γ , ∆ , A ⊢ i B is derivable and then, by an application of ⊃ R, so is Γ , ∆ ⊢ A ⊃ B, with height at most n. The same exercise can be done for the other rules of the system, thus showing that the rule

      n : Γ ⊢ C
    ─────────────── W
    n : Γ , ∆ ⊢ C

is height-preserving admissible. On the other hand, proving invertibility may require the admissibility of weakening. For example, for proving that ⊃ R is height-preserving invertible in G3ip, one has to show that G3ip ◮ n Γ , F ⊢ G whenever G3ip ◮ n Γ ⊢ F ⊃ G. The proof follows by induction on the height of the derivation of Γ ⊢ F ⊃ G. Consider, e.g., the case when Γ = Γ ′ , A ⊃ B and the last rule applied is ⊃ L, with premises Γ ⊢ i A and Γ ′ , B ⊢ i F ⊃ G. By the inductive hypothesis on the right premise, Γ ′ , B, F ⊢ i G is derivable. Considering the left premise, since Γ ⊢ i A is derivable, height-preserving admissibility of weakening implies that Γ , F ⊢ i A is derivable, and the result follows by an application of ⊃ L. Note that not all introduction rules in G3ip are invertible: if p 1 , p 2 are different atomic propositions, then p i ⊢ p 1 ∨ p 2 is derivable for i = 1, 2, but p i ⊢ p j is not for i ≠ j. Indeed, ∨ R i and ⊃ L are the only non-invertible rules in G3ip.
Finally, admissibility of contraction (C) often depends on invertibility results. As an example, consider that G3ip ◮ n Γ , F ∨ G, F ∨ G ⊢ C, where the last rule applied is ∨ L on one of the copies of F ∨ G, with premises Γ , F ∨ G, F ⊢ i C and Γ , F ∨ G, G ⊢ i C. The inductive hypothesis cannot be applied since the premises do not have duplicated copies of formulas. Since ∨ L is height-preserving invertible, the derivability of Γ , F ∨ G, F ⊢ i C implies that of Γ , F, F ⊢ i C and, similarly, Γ , G, G ⊢ i C is derivable from the second premise. By induction, Γ , F ⊢ i C and Γ , G ⊢ i C are derivable and the result follows by an application of ∨ L. Invertibility and admissibility results will be largely used for showing cut-elimination in Section 6.

Rewriting Logic Preliminaries
This section briefly explains order-sorted rewriting logic [24] and its main features as a logical framework. Maude [8] is a language and tool supporting the formal specification and analysis of rewrite theories, which are the specification units of rewriting logic.
An order-sorted signature Σ is a tuple Σ = (S, ≤, F) with a finite poset of sorts (S, ≤) and a set of function symbols F typed with sorts in S, which can be subsort-overloaded. For X = {X_s}_{s∈S} an S-indexed family of disjoint variable sets with each X_s countably infinite, the set of terms of sort s and the set of ground terms of sort s are denoted, respectively, by T_Σ(X)_s and T_Σ,s ; similarly, T_Σ(X) and T_Σ denote the set of terms and the set of ground terms. A substitution is an S-indexed mapping θ : X −→ T_Σ(X) that is different from the identity only for a finite subset of X and such that θ(x) ∈ T_Σ(X)_s if x ∈ X_s , for any x ∈ X and s ∈ S. A substitution θ is called ground iff θ(x) ∈ T_Σ or θ(x) = x for any x ∈ X. The application of a substitution θ to a term t is denoted by tθ. Acquaintance with the usual notions of position p in a term t, subterm t|_p at position p, and term replacement t[u]_p of t's subterm at position p with term u is assumed (see, e.g., [11]). The expression u ⪯ t (resp., u ≺ t) denotes that term u is a subterm (resp., proper subterm) of term t. Given a term t ∈ T_Σ(X), the term t̄ ∈ T_(S,≤,F∪C_t)(X) is the ground term obtained from t by turning each variable x ∈ vars(t) of sort s ∈ S into a fresh constant x̄ of sort s, where C_t denotes the set of such fresh constants. A rewrite theory is a tuple R = (Σ, E ⊎ B, R) with: (i) (Σ, E ⊎ B) an order-sorted equational theory with signature Σ, E a set of (possibly conditional) equations over T_Σ(X), and B a set of structural axioms -- disjoint from the set of equations E -- over T_Σ(X) for which there is a finitary matching algorithm (e.g., associativity, commutativity, and identity, or combinations of them); and (ii) R a finite set of rewrite rules over T_Σ(X) (possibly with equational conditions). A rewrite theory R induces a rewrite relation →_R on T_Σ(X) defined for every t, u ∈ T_Σ(X) by t →_R u if and only if there is a one-step rewrite proof of (∀X) t → u in R. More precisely, t →_R u iff there is a rule (l → r if γ) ∈ R, a term t′, a position p in t′, and a substitution θ : X −→ T_Σ(X) such that t =_{E⊎B} t′, t′|_p = lθ, u =_{E⊎B} t′[rθ]_p , and the condition γθ holds.
An inductive property of a rewrite theory R does not need to hold for every model of R, but just for T_R. Using a suitable inductive inference system, for example, one based on a convenient notion of constructors as proposed in [34], the semantic entailment |= in T_R can be under-approximated by an inductive inference relation ⊩ in R, which is shown to be sound with respect to |= (i.e., for any property ϕ, if R ⊩ ϕ, then T_R |= ϕ). A Σ-sentence (∀X) ϕ is called an inductive consequence of R iff R ⊩ (∀X) ϕ; this implies that T_R |= (∀X) ϕ.
Appropriate requirements are needed to make a rewrite theory R executable in Maude. It is assumed that the equations E can be oriented into a set of (possibly conditional) sort-decreasing, operationally terminating, and confluent rewrite rules E⃗ modulo B [8]. For a rewrite theory R, the rewrite relation →_R is undecidable in general, even if its underlying equational theory is executable, unless conditions such as coherence [36] are satisfied (i.e., rewriting with →_{R/E⊎B} can be decomposed into rewriting with →_{E/B} and →_{R/B}). The executability of a rewrite theory R ultimately means that its mathematical and execution semantics coincide.
In this paper, the rewriting logic specification of a sequent system S is a rewrite theory R S = (Σ S , E S ⊎ B S , R S ) where: Σ S is an order-sorted signature describing the syntax of the logic S ; E S is a set of executable equations modulo the structural axioms B S ; and R S is a set of executable rewrite rules modulo B S capturing those non-deterministic aspects of logical inference in S that require proof search. The point is that although both the computation rules E S and the deduction rules R S are executed by rewriting modulo B S , by the executability assumptions on R S , the rewrite relation → E S /B S has a single outcome in the form of a canonical form and thus can be executed blindly with a "don't care" non-deterministic strategy. Furthermore, B S provides yet one more level of computational automation in the form of B S -matching and B S -unification algorithms. This interplay between axioms, equations, and rewrite rules can ultimately make the executable specification R S very efficient with modest memory requirements.
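As a tiny illustration of this division of labor between axioms, equations, and rules (an example of ours, not taken from the L-Framework), the following module handles multiset union through structural axioms, simplifies deterministically with an equation, and leaves the genuinely non-deterministic step to a rewrite rule:

```
mod CHOICE is
  protecting NAT .
  sort MSNat .
  subsort Nat < MSNat .
  op none : -> MSNat .
  --- axioms B: union is associative, commutative, with identity none
  op _;_ : MSNat MSNat -> MSNat [assoc comm id: none] .
  var N : Nat . var M : MSNat .
  --- equation E: duplicates are removed blindly ("don't care")
  eq N ; N ; M = N ; M .
  --- rule R: non-deterministic choice of an element, needs search
  rl [pick] : N ; M => N .
endm
```

Rewriting with the equation always reaches the same canonical form, whereas the rule pick may produce different outcomes; only the latter requires the search-based semantics of →_R.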
Finally, the expression CSU_B(t, u) denotes the complete set of unifiers of terms t and u modulo the structural axioms B. Recall that for each substitution σ : X −→ T_Σ(X) with tσ =_B uσ, there are substitutions θ ∈ CSU_B(t, u) and γ : X −→ T_Σ(X) such that σ =_B θγ. For a combination of free and associative and/or commutative and/or identity axioms B, except for symbols that are associative but not commutative, such a finitary unification algorithm exists [16]. In the case of R_S , the structural axioms are associativity, commutativity, and identity, which are the usual structural axioms for multisets of formulas and sequents.

A Rewriting View of Sequent Systems
This section presents the machinery used for specifying a propositional sequent system S as a rewrite theory R_S . Such a framework is equipped with sorts that represent formulas, multisets of formulas, sequents, and lists of sequents. The user of the framework is expected to inhabit the sort for formulas in R_S with the concrete syntax of the system S . This has the immediate effect of fully inhabiting the remaining sorts of R_S . As shown below, rewrite rules in R_S correspond to the backward reading of the inference rules of S , so that proof-search in the former is successful whenever all leaves in a proof tree are instances of axioms. The system G3ip (Fig. 1) is used throughout the section for illustrating the proposed approach.
The notation of Maude [8] is adopted as an alternative representation to the rewriting logic one introduced in Section 3. This decision has the immediate effect of producing an executable specification of sequent systems, while providing a precise mathematical semantics of the given definitions. Additional details about the implementation in Maude are given in Section 7.
As a reference for the development of this section, the Maude declarations used below are summarized next:

    mod M is ... endm                 --- rewrite theory M
    sort S .                          --- sort S
    subsort S1 < S2 .                 --- subsort relation
    op f : S1 ... Sn -> S .           --- operator declaration
    rl [rule-name] : l => r .         --- rewrite rule with name 'rule-name'
    crl [rule-name] : l => r if c .   --- conditional rewrite rule

An order-sorted signature Σ_S , defining the sorts for building formulas (and multisets of formulas and sequents, which are introduced later), is assumed:

    sort Prop .                --- Atomic propositions
    sort Formula .             --- Formulas
    subsort Prop < Formula .   --- Atomic propositions are formulas

The object logic to be specified must provide suitable constructors for these two sorts. For instance:

    mod G3i is
      ...
      op p : Nat -> Prop .     --- Atomic propositions
      ...                      --- False/bottom, etc.
    endm

Atomic propositions take the form p(n), where the natural number n is the identifier of the proposition (e.g., the proposition p 3 is written as p(3)). Constructors for Prop are not allowed to have arguments of type Formula.
Multisets of formulas are built in the usual way, with a mixfix union operator. Given two multisets M and N, the term M;N denotes M ⊎ N. Note that the operator op _;_ (in Maude, _ denotes the position of the parameters) is declared with three structural axioms: associativity, commutativity, and the empty multiset as its identity element. Due to the subsort relation Formula < MSFormula, the least sort of a formula F is Formula, while F also denotes the singleton multiset containing F.
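These declarations can be sketched as follows (a sketch: the name of the empty-multiset constant is an assumption, while the attributes reflect the three structural axioms just described):

```
sort MSFormula .               --- multisets of formulas
subsort Formula < MSFormula .  --- a formula is a singleton multiset
op empty : -> MSFormula .      --- empty multiset (hypothetical name)
--- union, modulo associativity, commutativity, and identity
op _;_ : MSFormula MSFormula -> MSFormula [assoc comm id: empty] .
```

With these axioms, terms such as F ; G ; F and F ; F ; G are structurally equal, which is exactly the multiset reading of sequent contexts.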
Sequents, as expected, are built as pairs of multisets of formulas, and a goal is a list of sequents to be proved. The attribute frozen used in the declaration of the operator op _|_ (goals' concatenation) means that inference steps can only be performed on the head of a non-empty list of goals (i.e., the usual 'head-tail' recursive structure). More precisely, the attribute frozen(2) defines a rewriting strategy where the second argument cannot be the subject of rewriting. In this way, only the first sequent S in the list S | G can be reduced until it becomes proved (when possible), as will be explained shortly.
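A sketch of these declarations follows; the sequent constructor _|--_ matches the unifier output shown later in the paper, while the remaining names and attributes are assumptions consistent with the surrounding prose:

```
sort Sequent .                                 --- sequents
op _|--_ : MSFormula MSFormula -> Sequent .    --- Gamma |-- Delta
sort Goal .                                    --- lists of pending sequents
subsort Sequent < Goal .
op proved : -> Goal .                          --- empty list of goals
--- concatenation of goals; frozen(2) blocks rewriting in the tail
op _|_ : Goal Goal -> Goal [assoc id: proved frozen(2)] .
```

The identity axiom id: proved is what makes a term proved | G structurally equal to G, so that discharged obligations disappear from the list.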
Inference rules in the rewrite theory R_S are specified as rewrite rules that rewrite lists of sequents. There are two options for expressing an inference rule as a rewrite rule: it can be axiomatized as backward inference (i.e., from conclusion to premises) or as forward inference (i.e., from premises to conclusion). In this paper, as explained in Section 3, the backward inference approach is adopted, so that a proof-search process advances by rewriting the conclusion of an inference rule to its premises, thus yielding a bottom-up proof-search procedure. For instance, the initial rule, and the left and right introduction rules for conjunction in the system G3ip, are specified as the rewrite rules I, AndL, and AndR, respectively. Variables in rules are implicitly universally quantified, and the type of variables is specified with the syntax var x : S . Hence, the initial rule must be read as: for any atomic proposition P and multiset of formulas Gamma, the goal P ; Gamma |-- P rewrites to proved. The constant proved denotes the empty list of goals or, equivalently, the empty collection of (pending) proof obligations. Rules with more than one premise, such as AndR, are specified using the operator op _|_. Modulo the axioms B_S , a term proved | G is structurally equal to G, thus making the goal G automatically active for proof-search under the rewrite strategy declared for goals.
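These three rules can be sketched as follows, under the syntax assumptions made above (the conjunction constructor and the exact variable names are hypothetical, but the rule names I, AndL, and AndR are those used later in the paper):

```
op _/\_ : Formula Formula -> Formula .    --- conjunction

vars F G : Formula .
var  P : Prop .
vars Gamma Delta : MSFormula .

--- initial rule: an atomic proposition on both sides closes the goal
rl [I]    : P ; Gamma |-- P => proved .
--- left conjunction: decompose F /\ G in the antecedent
rl [AndL] : F /\ G ; Gamma |-- Delta => F ; G ; Gamma |-- Delta .
--- right conjunction: two premises, concatenated with op _|_
rl [AndR] : Gamma |-- F /\ G => (Gamma |-- F) | (Gamma |-- G) .
```

Note that each rule rewrites a conclusion into its premises, so one rewrite step corresponds to one bottom-up inference step.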
As illustrated with the running example, the syntax of the object logic in R S , as well as its inference rules, is straightforward. This is usually called in rewriting logic the ε-representational distance [25], where the system being specified mathematically as a rewrite theory and its codification in Maude as a system module are basically the same. This is certainly an appealing feature that can widen the adoption of the framework proposed here for implementing and analyzing sequent systems.
In face of the ε-representational distance, the adequacy of R S with respect to S follows from the soundness of rewriting logic itself.
Proposition 1 (Adequacy) Let S be a sequent system, R_S the resulting rewrite theory encoding the syntax and inference rules of S , S a sequent in S , and t_S its representation in R_S . Then, S ◮ S if and only if t_S →* proved in R_S . Observe that the proof of a sequent S in S may follow different strategies depending on the order in which the subgoals are proved. Such strategies are irrelevant because all the branches in the derivation of a proof must be closed (i.e., end with an instance of an axiom). Hence, if S is provable, it can be proved using the strategy enforced by op _|_ in R_S : always trying to first solve the left-most subgoal of the pending goals.
Let r be a sequent rule in S and R the corresponding rewrite rule in R_S . Modulo the associativity and commutativity of multisets of formulas, it is easy to show that, for any sequent S and its representation t_S , there is a one-step rewrite t_S →_{R_S} t′ using R iff t′ is the representation of the premises (where proved means no premises) obtained when r is applied (on the same active formula) in S. Hence, rewrite steps are in one-to-one correspondence with bottom-up proof-search steps in S -derivations.
When using the proposed framework, the resulting rewrite theory becomes a proof-search procedure. For instance, Maude's search command answers the question of whether the sequent p 1 ∧ p 2 ⊢ p 2 ∧ p 1 is provable in G3ip.
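Under the encoding assumptions sketched above (module name G3i, constructors p, _/\_, _|--_, and the constant proved), such a query could be phrased as:

```
--- is p1 /\ p2 |- p2 /\ p1 provable in G3ip?
search in G3i : p(1) /\ p(2) |-- p(2) /\ p(1) =>* proved .
```

A solution of this search corresponds to a G3ip derivation: here, applying AndR, then AndL on each premise, and closing both branches with the initial rule I.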
For the proof analyses later developed in this paper, some operations are added to the rewrite theory R_S : a new constructor for annotated sequents (see Definition 4) and a copy of the inference rules dealing with annotations. The function application s(·) denotes the successor function on Nat. Note that axioms are annotated with an arbitrary height k ≥ 1. In rules with two premises, both premises are marked with the same label. This is without loss of generality, since it is always possible to obtain larger annotations from shorter ones (see Theorem 1).
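The annotated copy of the rules can be sketched as follows; the constructor _:_ matches the notation n : S used throughout the paper, while the sort and rule names are assumptions:

```
sort ASequent .                       --- annotated sequents
subsort ASequent < Goal .
op _:_ : Nat Sequent -> ASequent .    --- n : S, sequent S with height bound n

vars F G : Formula .
var  P : Prop .
vars Gamma Delta : MSFormula .
var  N : Nat .

--- axioms close a goal at any height s(N) >= 1
rl [Ia]    : s(N) : (P ; Gamma |-- P) => proved .
--- in two-premise rules, both premises carry the same annotation
rl [AndRa] : s(N) : (Gamma |-- F /\ G)
          => (N : (Gamma |-- F)) | (N : (Gamma |-- G)) .
```

Giving both premises the same annotation N is harmless precisely because annotations are upper bounds on derivation heights.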
In the rest of the paper, given a sequent system S , R_S will denote the resulting rewrite theory that encodes the syntax, inference rules, and annotated inference rules of S . By abuse of notation, when a sequent S ∈ S is used in the context of the rewrite theory R_S (e.g., in S →_{R_S} S′), such S must be understood as the corresponding term t_S ∈ T_{Σ_{R_S}}(X) representing S in R_S . Similarly, "a sequent rule r" in the context of the theory R_S is to be understood as "the representation of r in R_S ". The expression R_1 ∪ R_2 denotes the extension of the theory R_1 by adding the inference rules of R_2 (and vice versa); in this case, the sequents in the resulting theory R_1 ∪ R_2 are terms in the signature Σ_{R_1} ∪ Σ_{R_2}. If S is a sequent, the expression R ∪ {S} denotes the extension resulting from R by adding the sequent S as an axiom, understood as a zero-premise rule (i.e., R is extended with the rule rl [ax] : S => proved .). Moreover, given a rewrite theory R and a sequent S, the notation R ⊢ S means S →* proved in R, i.e., there is a derivation of S in the system specified by R. Similarly, for annotated sequents, R ⊢ n : S means (n : S) →* proved in R.
Theorem 1 Let S be a sequent system, S a sequent, and k ≥ 1. Then, R_S ⊢ k : S implies R_S ⊢ s(k) : S. Proof By induction on k. If k = 1, then S is necessarily an instance of an axiom, and s(1) : S →_{R_S} proved using the same axiom. The case k > 1 is immediate by applying induction on the premises (with shorter derivations).

Proving Admissibility and Invertibility
This section presents rewrite- and narrowing-based techniques for proving admissibility and invertibility of inference rules of a sequent system specified as a rewrite theory R_S (see Section 4). They are presented as meta-theorems about sequent systems with the help of rewrite-related scaffolding, such as terms and substitutions, and they provide sufficient conditions for obtaining the desired properties. The system G3ip is, as in previous sections, used as the running example (Section 8 presents the complete set of case studies). The procedures proposed here heavily depend on unification, hence a brief discussion on the subject is presented next. First, note that, if no additional axioms are added to the symbols in the syntax of the object logic S , the existence of a unification algorithm for terms of the sort Sequent is guaranteed. As a matter of example, consider the unification of the (annotated) conclusions of the right and left rules for conjunction in G3ip. Unification problems take the form t1 =? t1' /\ ... /\ tn =? tn', and Maude's command variant unify computes the set of unifiers modulo the declared axioms in the theory. Observe that the variables of the second rule are renamed to avoid clash of names. The term %1:Formula denotes a fresh variable of sort Formula. Let t and t′ be the two terms in the above unification problem and θ the (unique) substitution computed. Consider the least signature Σ′_{R_S} that contains Σ_{R_S} as well as a fresh constant %i.Type for each variable %i:Type in tθ. Note the ":" in the variable %i:Type and the "." in the constant %i.Type. To avoid confusion, when the sort can be inferred from the context, the constant %i.Type is written as %i, but variables always carry their typing information. In this example, Σ′_{R_S} extends Σ_{R_S} with constants op %1 : -> Formula ., op %6 : -> MSFormula ., etc. The ground term tθ in the signature Σ′_{R_S} is s(%4) : %1 /\ %2 ; %6 |-- %4 /\ %5.
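Assuming the rule encodings sketched in Section 4 (the annotated conclusion of AndR on the left, that of AndL on the right, with primed variables playing the role of the renaming), the unification query could look like:

```
--- unify the conclusions of AndR and AndL modulo assoc/comm/id
variant unify in G3i :
  s(K:Nat) : (Gamma:MSFormula |-- F:Formula /\ G:Formula)
  =? s(K':Nat) : (F':Formula /\ G':Formula ; Gamma':MSFormula |-- C:Formula) .
```

Each computed unifier describes one way a sequent can simultaneously be a conclusion of both rules, which is exactly the case analysis needed in the meta-proofs below.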
As expected, on this (ground) sequent it is possible to apply both AndL and AndR.

Admissibility of Rules
This section introduces sufficient conditions to prove theorems of the form

    if S ◮ n Γ ′ ⊢ ∆ ′ , then S ◮ n Γ ⊢ ∆

i.e., height-preserving admissibility of the rule

    Γ ′ ⊢ ∆ ′
    ────────── r_s
     Γ ⊢ ∆

Admissibility is often proved by induction on the height of the derivation π of the sequent Γ ′ ⊢ ∆ ′ , in combination with a case analysis on the last rule applied in π. It turns out that this analysis may depend on other results. For example, as illustrated in Section 2, proving admissibility of contraction often depends on invertibility results.
Hence, any general definition of admissibility of rules in the rewriting setting has to internalize such reasoning. This will be formalized by closing the leaves in π w.r.t. theorems of the form "if S_1 is provable, so is S_2". Such theorems will be encoded as rewrite rules of the form r : S_1 → S_2. More precisely, let t_s be a ground term denoting a sequent and r : t → t′ be a rule. The set [[t_s]]_r is the least set containing t_s and the resulting premises after the instantiation of r in t_s. Let T_s be a set of ground terms and R = {r_1, . . . , r_m} be a set of theorems of the form r_k : t_k → t′_k. Then, [[T_s]]_R denotes the closure of T_s under all the rules in R. Proofs of admissibility need to consider all the cases of the last rule applied in a proof π. Definition 6 identifies rules that are height-preserving admissible relative to one of the sequent rules of the system.

Definition 6 (Local admissibility) Let S be a sequent system, I be a (possibly empty) set of rules, and r_t ∈ S and r_s be rules given by

    k : T_1  · · ·  k : T_n                 i : S_1
    ─────────────────────── r_t             ──────── r_s        (2)
           s(k) : T                          i : S

The rule r_s is height-preserving admissible relative to r_t in S under the assumptions I iff, assuming that i : S_1 is provable, for each θ ∈ CSU(i : S_1, s(k) : T), the ground sequent (i : S)θ is derivable in the theory extended as described below.

For proving admissibility of the rule r_s, the goal is to prove that, if S_1 is derivable with height i, then S is also derivable with at most the same height. The proof follows by induction on i (the height of a derivation π of S_1). Suppose that the last rule applied in π is r_t. This is only possible if S_1 and T "are the same", up to substitutions. Hence, the idea is that each unifier θ of i : S_1 and s(k) : T covers the cases where the rule r_t can be applied on the sequent S_1. For computing this unifier, it is assumed that the rules do not share variables (the implementation takes care of renaming the variables if needed). A proof obligation is generated for each unifier. Consider, for instance, the proof obligation of the ground sequent (i : S)θ for a given θ ∈ CSU(i : S_1, s(k) : T).
This means that, as hypothesis, the derivation of (i : S_1)θ ending with an application of r_t is valid, and hence all the premises in this derivation are assumed derivable. This is the purpose of extending the theory with the following set of ground sequents, which are interpreted as rules of the form t → proved:

    [[ {(k : T_1)θ, . . . , (k : T_n)θ} ]]_I        (3)

The proof of admissibility theorems may require auxiliary lemmas. Assuming the theorems I , the sequents resulting after an application of r ∈ I to the sequents in Equation (3) can also be assumed to be provable (whence the closure [[·]]_I ). The typical instantiation of I in admissibility analyses is the set of already proved invertibility lemmas.
If θ ∈ CSU(i : S_1, s(k) : T), then θ maps k to a fresh variable, say %1:Nat, and i to s(%1:Nat). Hence, the (ground) goal (i : S)θ to be proved takes the form s(%1) : Sθ, where %1 = kθ is the freshly generated constant in the extended signature. By induction, it can be assumed that the theorem (i.e., S_1 implies S) is valid for shorter derivations, i.e., derivations of height at most %1. This is the purpose of the added rule

    %1 : S → %1 : S_1        (4)

where the height of the derivation is "frozen" to be the constant %1 = kθ. This allows for applying r_s only on sequents of height %1. In particular, induction can be used on all the premises of the rule r_t in Equation (2).
If it is possible to show that the ground sequent (i : S)θ is derivable from the extended rewrite theory, then the admissibility result holds for the particular case when r t is the last applied rule in the derivation π of S 1 . Since a complete set of unifiers is finite for terms of the sort Sequent, there are finitely many proof obligations to discharge in order to check whether a rule is admissible relative to a rule in a sequent system. Observe that the set CSU(i : S 1 , s(k) : T ) may be empty. In this case, the set of proof obligations is empty and the property vacuously holds.
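To make the role of unification concrete, the following Python sketch (not part of the L-Framework, which is written in Maude) mimics how a proof obligation arises from unifying the annotated premise of r s with the annotated conclusion of a rule r t ; the tuple encoding of terms and all names here are invented for illustration.

```python
# Toy syntactic unification illustrating Definition 6: terms are
# variables ("?x") or tuples (functor, arg1, ..., argn).

def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a most general unifier of a and b, or None."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Premise of the candidate rule r_s, annotated with height ?i, and the
# conclusion of a system rule r_t, annotated with s(?k).
premise_rs = ("seq", "?i", ("|-", "?Gamma", "?C"))
concl_rt   = ("seq", ("s", "?k"), ("|-", "?Gamma2", ("and", "?A", "?B")))

theta = unify(premise_rs, concl_rt)
# A unifier exists and forces ?i = s(?k): the derivation of the premise
# may end with r_t, yielding one proof obligation. An empty set of
# unifiers would make this case vacuous.
assert theta is not None and theta["?i"] == ("s", "?k")
```

An incompatible pair, e.g. `unify(("and", "?x"), ("or", "?x"))`, returns `None`, which corresponds to the vacuous case where the proof-obligation set is empty.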
The base cases in this proof correspond to axiom rules. Consider for instance the case where r t is the initial rule in the system G3ip. Since there are no premises, the set in Equation (3) is empty and hence there are no ground sequents of the form %1:T j θ as hypotheses. Consider the hypothetical case that s(%1):Sθ is rewritten to a term of the form %1: S' by using another rule of the system (different from the initial rule r t ). Note that it is not possible to use the rule in Equation (4) to reduce the goal to proved: an application of this rule inevitably produces yet another sequent of the form %1: S''. Thus the only way to finish the proof is to apply the initial rule on the sequent s(%1):Sθ , and the rule in Equation (4) is never used in such a proof.
The notion of admissibility relative to a rule is the key step in the inductive argument when establishing sufficient conditions for the admissibility of a rule in a sequent system. Theorem 2 Let S be a sequent system, I be a (possibly empty) set of rules, and r s be an inference rule with premise Γ ′ ⊢ ∆ ′ and conclusion Γ ⊢ ∆ . If r s is admissible relative to each r t in S under the assumption I , then r s is height-preserving admissible in S under the assumption I , i.e., S ◮ n Γ ′ ⊢ ∆ ′ implies S ◮ n Γ ⊢ ∆ , assuming the theorems I .
Proof Assume that Γ ′ ⊢ n ∆ ′ is derivable in the system S . The proof proceeds by induction on n with case analysis on the last rule applied. Assume that the last applied rule is r t . By hypothesis (using Definition 6), it can be concluded that Γ ⊢ n ∆ is derivable and the result follows.
It is important to highlight the rationale behind Definition 6, which is similar to that of the results in the forthcoming sections. The proof-search procedure tries to show that (i : S)θ is provable by reducing it to proved (Proposition 1). Since rules in R S are encoded in a backward fashion (rewriting the conclusion into the premises), the procedure attempts to build a bottom-up derivation of that sequent. The set of assumptions (Equation (3)) is added as axioms and it is closed under the rules in I . The closure reflects forward reasoning: if the (ground) sequent S is provable and a theorem r ∈ I applies on it, then the right-hand side of r can also be assumed to be provable. During the search procedure, a goal is immediately discharged if it belongs to that set of assumptions. Finally, the induction principle is encoded by specializing the theorem to be proved, allowing applications of it only to ground sequents with shorter height annotations (Equation (4)).
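The proof-search rationale just described -- backward rule application, immediate discharge of assumptions, and a bounded search -- can be sketched in a few lines of Python. This is a toy model under invented representations, not the Maude implementation:

```python
# Minimal bottom-up proof search: backward rules rewrite a goal into its
# premises, assumptions are discharged immediately, search is bounded.

def provable(goal, rules, assumptions, bound):
    """rules: goal -> list of premise-lists (backward reading)."""
    if goal in assumptions:          # goal is an added axiom
        return True
    if bound == 0:
        return False
    for premises in rules(goal):
        if all(provable(p, rules, assumptions, bound - 1)
               for p in premises):
            return True
    return False

# Toy fragment: a sequent is (context, goal-formula).
def g3_toy(goal):
    ctx, c = goal
    cases = []
    if c in ctx:                                 # initial rule: no premises
        cases.append([])
    if isinstance(c, tuple) and c[0] == "and":   # conjunction right
        cases.append([(ctx, c[1]), (ctx, c[2])])
    return cases

ctx = frozenset({"p", "q"})
assert provable((ctx, ("and", "p", "q")), g3_toy, set(), bound=3)
assert not provable((ctx, "r"), g3_toy, set(), bound=3)
# With (ctx, "r") assumed provable (an axiom as in Equation (3)):
assert provable((ctx, "r"), g3_toy, {(ctx, "r")}, bound=3)
```

The last assertion shows the role of the assumption set: a goal matching an added axiom is discharged without any rule application.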
The proof of admissibility of weakening for G3ip and some other systems studied in Section 8 does not rely on any previous result (and hence, I is empty). Also, the proof of Theorem 1, mechanized by stating it as the admissibility of the annotated rule, is proved with I = ∅ in all the systems in Section 8. The proof of contraction requires some invertibility lemmas, and they must be added to I . The use of I thus provides a convenient and modular approach because properties can be proved incrementally.

Invertibility of Rules
This section gives a general definition for proving height-preserving invertibility of rules. Observe that such an analysis is done premise-wise: the case of rules with several premises is handled for each premise separately. For instance, consider the rule ⊃ L of G3ip, and let ⊃ L1 and ⊃ L2 be the rules obtained by inverting it on its left and right premise, respectively. It can be shown that ⊃ L1 is not admissible while ⊃ L2 is. Hence, ⊃ L is invertible only in its right premise (see Definition 5). Some invertibility results depend on, e.g., the admissibility of weakening. Hence, for modularity, the invertibility analysis is parametric on a set H of admissible rules of the system (that can be used during the proof-search procedure).
Definition 7 (Local invertibility) Let S be a sequent system and H be a (possibly empty) set of rules. Consider the following annotated inference rules: Under the assumption H , the premise l ∈ 1..m of the rule r s is height-preserving invertible relative to the rule r t iff for each θ ∈ CSU(s(k) : S, s(m) : T ): where the variables in S and T are assumed disjoint. Under the assumption H , the rule r s is height-preserving invertible relative to r t if all its premises are.
In order to show the invertibility of a rule r s , the goal is to check that derivability is not lost when moving from the conclusion S to the premises S 1 , · · · , S m . Each premise S l entails a different proof obligation. Let r sl , with premise S l and conclusion S, be the "sliced" version of rule r s when the case of the premise l ∈ 1..m is considered. The proof is by induction on the derivation π of S. Suppose that the last rule applied in π is r t . Observe that this is only possible if S and T unify. Let θ ∈ CSU(s(k) : S, s(m) : T ) and assume that θ (k) = %1:Nat (and hence θ (m) = %1:Nat). The premises of the resulting derivation π are assumed to be provable, and the theory is extended with the axioms (m : T j )θ . Since those premises have shorter derivations, induction applies on them. More precisely, given that the ground sequent (m : T j )θ is provable and the rule r sl can be applied on it, the resulting sequent after the application of r sl is also provable with the same height of derivation mθ . This inductive reasoning is captured as follows. The unifier γ checks whether it is possible to apply r sl on the premise T j . If this is the case, the resulting premise S l γ is added as an axiom. If, from the extended theory, it is possible to prove derivable the premise S l with height kθ , then the invertibility result holds for the particular case when r t was the last applied rule in the derivation π of S. If the set CSU(s(k) : S, s(m) : T ) is empty, it means that the rules r t and r s cannot be applied on the same sequent and the property vacuously holds for that particular case of r t . For instance, in G3ip, the proof of invertibility of ∧ R does not need to consider the case of invertibility relative to ∨ R since it is not possible to have, at the same time, a conjunction and a disjunction in the succedent of the sequent. In multiple-conclusion systems such as G3cp (see Section 8.3), this proof obligation is not vacuously discarded.
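The premise-wise treatment can be illustrated with a small Python helper that "slices" a rule into one inverted rule per premise. The rule names and sequent strings below are informal renderings of ⊃ L , not the tool's syntax:

```python
# Slice a rule for premise-wise invertibility: each premise S_l of r_s
# yields a one-premise inverted rule r_sl (conclusion rewritten into
# that premise), checked independently. Illustrative encoding only.

def slice_rule(name, premises, conclusion):
    return [(f"{name}{l + 1}", conclusion, p)   # inverted: conclusion -> premise
            for l, p in enumerate(premises)]

impL = slice_rule("impL",
                  ["Gamma, A->B |- A", "Gamma, B |- C"],
                  "Gamma, A->B |- C")
# impL1 : Gamma, A->B |- C  ==>  Gamma, A->B |- A
# impL2 : Gamma, A->B |- C  ==>  Gamma, B |- C
assert len(impL) == 2
```

Each sliced rule is then submitted to the admissibility check of Definition 7; for ⊃ L only the second slice turns out to be admissible, matching the claim that ⊃ L is invertible only in its right premise.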
Theorem 3 presents sufficient conditions for checking the invertibility of a rule r s in a sequent system. Note that if r s is a rule without premises (i.e., an axiom), the result vacuously holds. Hence, only the case for m ≥ 1 premises is considered. The proof of the next theorem is similar to the one given for Theorem 2.
Theorem 3 Let S be a sequent system, H be a set of rules, and r s be an inference rule with m ≥ 1 premises in S . Let l ∈ 1..m. If the premise l of r s is height-preserving invertible relative to each r t in S under the assumption H , then the premise l in r s is height-preserving invertible in S under the assumption H .
Most of the proofs of invertibility require Theorem 1 and, in some cases, admissibility of weakening. Hence, the assumption H must contain those theorems for each case. The dependencies among the different proofs will be stated in Section 8 for each of the systems under analysis.

Proving Cut-Elimination and Identity Expansion
One of the most fundamental properties of a proof system is analyticity, expressing that in a proof of a given formula F, only subformulas of F need to appear. In sequent calculus, this property is often proved by showing that the cut-rule is admissible. Roughly, the cut-rule introduces an auxiliary lemma A in the reasoning "if A follows from C and B follows from A, then B follows from C". The admissibility of the cut rule (known as the cut-elimination property) states that adding intermediary lemmas does not change the set of theorems of the logical system. That is, the lemma A is not needed in the proofs of the system. This implies that, if B is provable from the hypothesis C, then there exists a direct (cut-free) proof of this fact.
The cut-rule may take different forms depending on the logical system at hand. For concreteness, consider the following cut-rule for the system G3ip: This rule has an additive flavor: the context Γ is shared by the premises. Later, different cut-rules will be considered, including multiplicative-like cut-rules (splitting the context Γ in the premises) and also cut-rules for one-sided sequent systems.
Gentzen-style proofs of admissibility of the cut-rule [13] generally proceed by showing that topmost applications of cut can be eliminated. This is usually done via a nested induction on the complexity of the cut-formula A and a sub-induction on the cut-height, i.e., the sum of the heights of the derivations of the premises. In the following, the rationale behind this cut-elimination procedure will be formalized by means of rewrite-based conditions. The next section discusses how these conditions become a semi-automatic procedure for checking the cut-elimination property for different logical systems.
The analyses in Section 5 showed how to inductively reason on the height of derivations inside the rewriting logic framework. But what about the induction on the complexity of formulas? In general, it is possible to inductively reason about terms built from algebraic specifications. The unification of a sequent against the conclusion of an inference rule r s ∈ S uniquely determines the active formula F introduced by r s . Since terms in the sort Formula are inductively generated from the constructors of that sort, special attention can be given to the sub-terms (if any) of F since those are the only candidates that are useful to build the needed inductive hypothesis in the cut-elimination proof. Such sub-terms are called auxiliary formulas.
Gentzen's procedure consists of reducing topmost cut-applications to the atomic case, then showing that these cuts can be eliminated. The reduction step to be performed depends on the status of the cut-formula in the premises: principal on both or non-principal on at least one premise. This procedure is formalized next.
Principal Cases. If the cut-formula is principal (see Definition 2) in both premises of the cut-rule, then induction on the complexity of the cut-formula is applied. For instance, consider the case when the cut-formula is A ∧ B: Both applications of the cut-rule in the right-hand derivation are on smaller formulas and then induction applies. Note that this kind of reduction does not necessarily preserve the height of the derivation. Hence, no height-annotation can be used on the resulting premises. Note also that weakening is needed in order to match the derivation leaves.
If the cut-formula A ∧ B is frozen (making A, B constants of type Formula), then the inductive reasoning is formalized with the following rules: Here cA, cB are constants representing the sub-terms of the ground term A ∧ B. The other terms are variables. More generally, if the cut-formula is a term of the form f (t 1 , ...,t n ), each t i of sort Formula gives rise to a different rule. In the case of constants (e.g., False) and atomic propositions (no sub-terms of sort Formula), the set of generated rules is empty.
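The generation of inductive cut-rules from the frozen cut-formula can be sketched as follows; the tuple encoding of formulas and the rule labels are invented here for illustration:

```python
# One inductive cut-rule per immediate subterm of sort Formula of the
# frozen cut-formula f(t1, ..., tn); atoms and constants generate none.

def subformula_cut_rules(cut_formula):
    """cut_formula: ('and', A, B)-style tuple, or an atom string."""
    if not isinstance(cut_formula, tuple):
        return []                     # atoms/constants: no generated rules
    return [("cut-ind", sub)          # a cut allowed on this subterm
            for sub in cut_formula[1:]]

rules = subformula_cut_rules(("and", "cA", "cB"))
assert rules == [("cut-ind", "cA"), ("cut-ind", "cB")]
assert subformula_cut_rules("False") == []
```

This mirrors the text: a conjunction cA ∧ cB yields two rules, one permitting cuts on cA and one on cB, and the base cases of the induction generate the empty set.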
Non-principal Cases. The non-principal cases in the proof of cut-elimination require permuting up the application of the cut-rule w.r.t. the application of an inference rule, thus reducing the cut-height. As an example, assume that the left premise of the cut-rule is the conclusion of an application of the rule ∧ L . Hence, it must be the case that the antecedent of the sequent contains a conjunct F ∧ G. The reduction is as follows: Permuting up cuts results in an application of the cut-rule on shorter derivations. The top-rightmost sequent in the right-hand side derivation is deduced via the height-preserving invertibility of the ∧ L rule and the fact that Γ , F ∧ G, A ⊢ s(m) B is provable. Similar reductions are possible when the cut-formula A is not principal in the right premise of the cut-rule. If the height of the premises in the above derivation is frozen, it is possible to extend the theory R S with two new rules that exactly mimic the behavior of replacing a non-principal cut with a smaller one. Those rules are: where cn (resp., cm) is the frozen constant resulting from n (resp., m) and cA the ground term representing the cut-formula. The first rule reflects the principle behind the above reduction, where the height of the left premise of the cut-rule is reduced. The second rule reflects the case where the cut-formula is not principal in the right premise. Note that the first rule cannot be applied directly on the sequent Γ , F ∧ G ⊢ B: the left premise Γ , F ∧ G ⊢ A is provable with height s(n) but not necessarily with height n. Similarly for the second rule and the right premise of the cut-rule.

Cut Elimination
The scenario is now ready for establishing the necessary conditions for cut-elimination. The specification of the additive cut-rule for the system G3ip can be written as follows. This rewrite rule has an extra variable (A) in the right-hand side and cannot be used for execution (unless a strategy is provided). Hence, the attribute nonexec identifies this rule as non-executable. In the following, lcut (resp., rcut) is used to denote the term Gamma |--A (resp., Gamma, A |--C) whose set of variables is {Gamma, A} (resp., {Gamma, A, C}). Moreover, hcut denotes the head/conclusion of the cut-rule, i.e., the term Gamma |--C.
As already illustrated, admissibility results and invertibility lemmas are usually needed in order to complete the proof of cut-elimination. Such auxiliary results are specified in the analysis, respectively, as H and I . As explained in Sections 5.1 and 5.2, a rule in H encodes an admissible rule of the system that can be used, in a backward fashion, during the proof-search procedure. Moreover, a rule in I is used to close the assumptions under the invertibility results. Given a logical system S , every possible derivation of the premises of the cut-rule should be considered. This means that there is a proof obligation for each r s , r t ∈ S s.t. r s (resp., r t ) is applied on the left (resp., right) premise (i.e., when r s matches lcut and r t matches rcut). More precisely: Definition 8 (Local cut-elimination) Let S be a sequent system, and H and I be sets of rules. Let r s and r t be inference rules in S , where r s has premises n : S 1 , . . . , n : S m and conclusion s(n) : S, and r t has premises k : T 1 , . . . , k : T n and conclusion s(k) : T. Under the assumptions H and I , the cut-rule is admissible relative to r s and r t iff, for each θ ∈ CSU(S, lcut) and γ ∈ CSU(T, rcutθ ), the corresponding proof obligation can be discharged, where the variables in S and T are assumed disjoint. The rules in S are extended with axioms corresponding to the sequents resulting after the application of r s and r t on the premises of the cut-rule. Note that those premises are added with and without the height annotations, which can be used in applications of inductive hypotheses with shorter derivations and (non-height-preserving) applications of the cut-rule over simpler formulas, respectively. Note also that the set of axioms is closed under applications of the rules in I .
The set of rules in ind-F specifies a valid application of the cut-rule with subterms of the cut-formula (t ≺ Aγ). As usual, this rule is assumed to be universally quantified on the remaining variables (t is a ground term). On the other hand, the set of rules in ind-H specifies a valid application of the cut-rule with shorter derivations. As discussed previously, two cases need to be considered: when the left and right premises are shorter. In both cases, the height of the derivation is fixed (nγ) as well as the cut-formula ([Aγ/A]).
Regarding the base cases of the induction, if the cut-formula is a constant or an atomic proposition, the set ind-F is empty. If the rule r s is an axiom, then the set {n : S j γ, S j γ | j ∈ 1..m} is also empty. In this case, an attempt to prove hcutγ by starting with the first rule in ind-H leads to a goal of the form nγ : lcut where no inference rule r ∈ S can be applied: nγ is a constant of the form %1 and it does not unify with the annotation s(n) in the conclusion of r. Hence, if r s is an axiom, a proof of hcutγ cannot start with ind-H. A similar analysis applies to r t .
The cut-elimination proof needs to consider all the possible matchings of the rules of the system and the premises of the cut-rule.
Theorem 4 Let S be a sequent system, and H and I be sets of rules. If, for each r s and r t ∈ S , the cut-rule is admissible relative to r s and r t under the assumptions H and I , then the cut-rule is admissible in S (relative to H and I ).
Proof Consider the annotated cut-rule below (a similar analysis applies to the other cut-rules introduced in Section 8). Assume that there exists a derivation of the premises starting, respectively, with the rules r s and r t . By the hypothesis and Definition 8, there is a valid derivation of the sequent in the conclusion and the result follows.

Identity Expansion
The identity axiom states that any atomic formula is a consequence of itself. It has a dual flavor w.r.t. the cut-rule in the sense that, while in the cut-rule a formula is eliminated, in the identity axiom an atomic formula is introduced. In G3ip, I represents the general schema for the identity axiom (see Fig. 1), where p is an atomic formula. In Section 6.1, the cut-rule was proved admissible relative to some assumptions. The corresponding dual property w.r.t. the identity axiom is identity expansion: assuming I, is any well-formed formula a consequence of itself? Or, equivalently: is the general identity axiom -- not restricted to atoms -- admissible? For example, in G3ip, proving identity expansion requires proving the admissibility of the axiom I A for an arbitrary formula A. Observe that the cut/identity duality is also reflected during (bottom-up) proof-search: while applications of the cut-rule should be avoided because arbitrary formulas need to be produced "out of thin air", applications of general identity axioms are most welcome, since they may make the proof smaller and proof-search simpler. The proof of identity expansion proceeds by induction on the formula A. Consider a constructor C = f ( t) of type S 1 , ..., S n → Formula and let C be the ground term f ( t) where each t i ∈ t is replaced by a fresh constant t i of sort S i . Hence, the goal is to prove that the corresponding instance of the identity axiom is admissible. Theorem 5 Let S be a propositional sequent system and R S , with signature (S, ≤, F), be the resulting rewrite theory encoding S . Identity expansion holds in S iff the above goal can be discharged for each constructor of the sort Formula. In one-sided systems, the goal is to show that ⊢ A, A ⊥ is provable for any A, where A ⊥ denotes the dual of A (see Section 8.4 for the definition of duality). Theorem 5 can be easily adapted to the one-sided case.
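The inductive argument behind identity expansion can be replayed on a toy formula language: each connective case reduces the goal A ⊢ A to the same property for the immediate subformulas, bottoming out at the atomic identity axiom. A hedged Python sketch, not the mechanized proof:

```python
# Identity expansion by structural induction: A |- A is derivable for
# every formula A, using only atomic identity at the leaves. Formulas
# are tuples ("and"/"or"/"imp", A, B) or atom strings (invented here).

def id_expand(a):
    """True if A |- A reduces to atomic identity axioms."""
    if isinstance(a, str):            # atomic: the identity axiom applies
        return True
    op = a[0]
    if op in ("and", "or", "imp"):    # apply the right/left rules for op,
        return id_expand(a[1]) and id_expand(a[2])  # then recurse on A, B
    return False

assert id_expand(("imp", ("and", "p", "q"), ("or", "p", "r")))
```

The recursion mirrors the paper's induction on the constructor f(t): proving the goal for f(t) only requires the identity property for the fresh constants standing for its subterms of sort Formula.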

Reflective Implementation
This section details the design principles behind the L-Framework 3 , a tool implementing the narrowing procedures described in Sections 5 and 6. The L-Framework receives as input the object-logic sequent system (OL) to be analyzed, as a rewrite theory R S , and the specification of the properties of interest. Then, it outputs LaTeX files with the results of the analyses, e.g., the proof reductions needed to establish cut-elimination. The specification of the OL follows as in Section 4 (details can be found in Section 7.1). As explained in Section 7.2, the implementation of the algorithms heavily relies on the reflective capabilities of rewriting logic. Moreover, the specification of the properties for each kind of analysis follows a similar pattern where a suitable functional theory needs to be instantiated (Section 7.3). The subsequent sections offer further details about the implementation of each kind of analysis: admissibility of rules (Section 7.4), invertibility of rules (Section 7.5), cut-elimination (Section 7.6), and identity expansion (Section 7.7).
For readers interested in the details of the implementation, pointers to the Maude files and definitions are given in this section. However, readers interested only in the results of the analyses can safely skip this section; several examples and instances of the definitions presented here can be found in Section 8.

Sequent System Specification
The starting point for a sequent system specification is the definition of its syntax and inference rules. The file syntax.maude contains the functional module (i.e., an equational theory with no rewrite rules) SYNTAX-BASE. Such a module defines the sorts Prop and Formula, as well as the subsort relation Prop < Formula. No constructors for these sorts are given since those depend on the OL and hence must be provided. The sort MSFormula, for multisets of formulas, is pre-defined here with the constructors presented in Section 4: op * denoting the empty multiset and op _;_ for multiset union. Some auxiliary functions, needed to produce LaTeX output, are declared in this module. In particular, the OL may define a mapping to replace symbols of the syntax with LaTeX macros. This syntax should suffice for most sequent-based inference systems. As shown in Section 8, it is also possible to provide more constructors to deal, for instance, with dyadic systems (i.e., one-sided sequents with two separated contexts).
A sort for height annotations is added to the specification: the sort INat (file nat-inf.maude) extends the natural numbers, expressed in Peano-like notation as s^n(z), with a constant inf denoting "unknown". This constant is used, e.g., in the cut-elimination procedure where structural cuts do not preserve/decrease the height of the derivation and, therefore, the resulting sequent does not have any (known) measure.
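The INat idea can be miniaturized in Python: Peano numerals extended with an absorbing inf, so that heights lost by structural cuts are simply "unknown". The encoding below is illustrative only:

```python
# Peano naturals s^n(z) plus an "unknown" element inf that absorbs the
# successor, mirroring height annotations lost after a structural cut.

Z, INF = ("z",), ("inf",)

def s(n):
    """Successor; inf stays inf."""
    return INF if n == INF else ("s", n)

def height(n):
    """Concrete height, or None when the annotation is unknown."""
    if n == INF:
        return None
    k = 0
    while n != Z:
        n = n[1]
        k += 1
    return k

assert height(s(s(Z))) == 2
assert height(s(INF)) is None      # cuts need not preserve the height
```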
OLs are also allowed to define equations to complete the definition of the mapping TEXReplacementSeq that replaces the names of the rules with suitable LaTeX symbols: eq TEXReplacementSeq = ('AndL |-> '\wedge_L), ('AndR |-> '\wedge_R) ...
The sort Goal, the subsort relation Sequent < Goal, the constructors op _|_, and proved (for building list of sequents) are also defined in the module SEQUENT.
The sequent.maude file also specifies the module SEQUENT-SOLVING with auxiliary procedures for building derivation trees and outputting LaTeX code. This module uses reflection heavily (more on this in the next section) in order to deal, in a general and uniform way, with the representation of any sequent regardless of its specific syntax. Moreover, Maude's module LEXICAL is used for converting between strings and lists of Maude's quoted identifiers (terms of the form 'identifier with sort Qid). As shown below, Qids materialize the meta-representation of any term.

Reflection and the Core System
A reflective logic is a logic in which important aspects of its meta-theory can be represented consistently at the object level. In a nutshell, a reflective logic is a logic that can be faithfully interpreted in itself. In this way, the object-level representation can correctly simulate the relevant deductive aspects of its meta-theory. Maude's language design and implementation make systematic use of the fact that rewriting logic is reflective, making the meta-theory of rewriting logic accessible to the user as a programming module [7].
For the purpose of this paper, the focus is on two meta-theoretic notions, namely, those of theory/module and of the deductive entailment relation ⊢. Formally, there is a universal rewrite theory U in which any finitely represented rewrite theory R can be represented as a term R (including U itself), any terms t, u in R as terms t, u, respectively, and a pair (R, t) as a term ⟨R, t⟩, in such a way that the equivalence R ⊢ t → u ⇔ U ⊢ ⟨R, t⟩ → ⟨R, u⟩ holds. Since U is representable in itself, a "reflective tower" can be achieved with an arbitrary number of levels of reflection.
In general, simulating a single step of rewriting at one level involves many rewriting steps one level up. Therefore, in naive implementations, each step up the reflective tower comes at considerable computational cost. In Maude, key functionalities of the universal theory U have been efficiently implemented in the functional module META-LEVEL, providing ways to perform reflective computations.
Additional utilities for manipulating modules and terms in the L-Framework are implemented in the meta-level-ext.maude file. For instance, all the operations on theories described in Section 5 are contained there (some of them are detailed below).
The module META-LEVEL implements the so-called descent functions that manipulate (meta-)terms. The function op upTerm : Universal -> Term . returns the (meta-)representation t of a term t. For example, constants are represented as Id.Type (e.g., 'proved.Goal), variables as Id:Type (e.g., 'F:Formula), and functions as Id[Params] (e.g., the term '_;_[Gamma:MSFormula,Delta:MSFormula] represents the multiset of formulas Gamma ; Delta).
From a well-formed (meta-) term t it is possible to recover the term t (one level down). Note, however, that not all meta-terms have a suitable representation in the theory in the level below, for instance, '_|_[S:Sequent, F:Formula] is not the image of any valid sequent or formula in the module specifying the system G3ip. The function op downTerm : Term Universal -> Universal takes as parameter the (meta) representation of a term t and a term t ′ . It returns the canonical form of t if it is a term having the same sort of t ′ ; otherwise, it returns t ′ . Usually, t ′ is an error term used to denote that the descent translation was not possible. For instance, if t is expected to be the meta-representation of a formula, then t ′ can be the constant op error : -> Formula .
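A toy analogue of the upTerm/downTerm pair may clarify the round trip between terms and meta-terms, including the error value returned when a meta-term has no object-level reading. The dictionary encoding of meta-terms below is invented for this sketch and is not Maude's Term sort:

```python
# Lift terms to a uniform meta-representation and lower them back,
# returning an error term when lowering is impossible (the analogue of
# downTerm's fallback argument).

def up_term(t):
    if isinstance(t, tuple):
        return {"op": t[0], "args": [up_term(a) for a in t[1:]]}
    return {"const": t}

def down_term(m, err):
    try:
        if "const" in m:
            return m["const"]
        return (m["op"], *[down_term(a, err) for a in m["args"]])
    except (TypeError, KeyError):
        return err                    # no object-level reading: error term

t = ("and", "p", "q")
assert down_term(up_term(t), "error") == t      # round trip
assert down_term({"bogus": 1}, "error") == "error"
```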
At the meta-level, modules in Maude (i.e., rewrite theories) are represented as terms of sort Module. The function upModule can be used to obtain such a term. All the components of a module (sorts, functions, equations, and rules) have a suitable sort and representation at the meta-level, thus making them first-class citizens. For instance, most of the analyses require extending the sequent theory with new rules. The function below adds to the module M a set of rules RS: The module META-LEVEL offers functions to perform Maude's operations at the meta-level. For instance, the function metaRewrite allows for rewriting the meta-representation of a term in a given module and metaVariantDisjointUnify allows for solving unification problems. Moreover, given a theory R and two terms t, u in that theory, metaSearchPath allows for checking the entailment R ⊢ t → u by testing whether U ⊢ ⟨R, t⟩ → ⟨R, u⟩. For instance, in the following assignment: Ans := metaSearchPath(M, SGoal, upTerm(proved), nil, '*, bound-spec, 0) .
the term M is the meta-representation of a sequent system; SGoal is a term representing the goal/sequent to be proved; nil specifies that there are no additional conditions to be satisfied by the solution; '* means that the reduction may take 0 or more steps; bound-spec indicates the maximum depth of the search; and the final 0 is the solution number. The term Ans is a term of type Trace?. It can be failure or a list of trace steps showing how to perform each of the reductions t → r t ′ where the rule r is applied to t leading to t ′ . It is worth noticing that metaSearchPath implements a breadth-first search strategy. Hence, if SGoal can be rewritten into proved with at most bound-spec steps, then metaSearchPath will eventually find such proof (if the executability conditions for the rewrite theory are met [8]). Moreover, since Ans is a proof term evidencing how to prove the rewriting SGoal * →proved, it can be used to rebuild the needed derivation of SGoal in the sequent system at hand.
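The behavior of metaSearchPath -- breadth-first search for a rewrite path together with the trace that witnesses it -- can be sketched as follows. This is a self-contained Python model; the rule representation and all names are assumptions of this sketch:

```python
# Breadth-first search for a rewrite path start ->* target, returning
# the list of applied rule names (the "trace"), or None on failure.

from collections import deque

def search_path(start, target, rules, bound):
    """rules: list of (name, fn) where fn(t) yields successor terms."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        term, trace = queue.popleft()
        if term == target:
            return trace
        if len(trace) >= bound:       # depth bound, like bound-spec
            continue
        for name, fn in rules:
            for nxt in fn(term):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, trace + [name]))
    return None                       # comparable to Maude's failure

# Toy goal: rewrite 3 down to 0 ("proved") with a decrement rule.
rules = [("dec", lambda n: [n - 1] if n > 0 else [])]
assert search_path(3, 0, rules, bound=5) == ["dec", "dec", "dec"]
assert search_path(3, 0, rules, bound=2) is None
```

As in the text, breadth-first exploration guarantees that a proof within the bound is eventually found, and the returned trace plays the role of the proof term from which the sequent derivation can be rebuilt.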

The General Approach for Implementing the Algorithms
All the analyses implemented in the L-Framework are instances of the same template:
1. A module interface (called theory in Maude) specifies the input for the analysis. This theory includes parameters such as: the name of the module implementing the OL; the specification, as a rewrite rule, of the theorem to be proved; the extra hypotheses (sets of rewrite rules) corresponding to already proved theorems; the bound for the search depth; etc.
2. A module implementing the decision procedures proposed in Sections 5 and 6. All algorithms follow the same principles: (a) A function generate-cases uses unification to generate all the proof obligations to be discharged. In each kind of analysis, there are suitable sorts and constructors to represent the proof obligations. Normally, the cases include the terms denoting the premises (that can be assumed to be provable) as well as the goal to be proved. (b) Auxiliary definitions to extend the theory with new axioms and the right inductive hypotheses. Take for instance the function inductive-rule explained above. (c) A function holds? that receives as parameter one of the proof obligations, uses the functions in (b) to extend the theory, and calls metaSearchPath to check whether the goal can be proved.
3. An extra module providing facilities to produce the LaTeX output.
The following sections give some details about the components (1) and (2) for each kind of analysis. Common facilities for all the analyses are implemented in the theorem-proving-base.maude file, including: converting the term t into the ground term t (i.e., replacing variables with fresh constants) and generating axioms (rules of the form rl [ax] T => proved .) from the assumptions of the theorem.
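The common template can be condensed into a few lines of Python: case generation, theory extension, and the final bounded search. The function names and fields below paraphrase generate-cases and holds? with invented signatures; this is a schematic sketch, not the Maude code:

```python
# Schematic analysis driver: generate proof obligations, extend the
# theory per case (axioms + inductive rules), then try to prove each
# goal within the bound.

class Case:
    def __init__(self, name, goal):
        self.name, self.goal = name, goal

def run_analysis(generate_cases, extend_theory, prove, bound):
    results = {}
    for case in generate_cases():
        theory = extend_theory(case)          # axioms + inductive rules
        results[case.name] = prove(theory, case.goal, bound)
    return results

cases = [Case("andL", "g1"), Case("init", "g2")]
out = run_analysis(lambda: cases,
                   lambda c: {"axioms": []},  # stub theory extension
                   lambda th, g, b: g == "g1",  # stub prover
                   bound=5)
assert out == {"andL": True, "init": False}
```

In the actual tool, the stubbed prover corresponds to the call to metaSearchPath and the stubbed extension to the axiom- and induction-generating functions described in the previous sections.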

Admissibility Analysis
The core procedures for automating the proof of admissibility theorems, following the definitions in Section 5.1, are specified in admissibility.maude. The input point is the definition of the functional theory below. Theories in Maude are module interfaces used for declaring parameterized modules. Such interfaces define the syntactic and semantic properties to be satisfied by the actual parameter modules used in an instantiation [7].
The name of the theorem is specified with the string th-name (e.g., "Admissibility of weakening."). The identifier of the module mod-name (e.g., 'G3ip) is used to obtain the meta-representation of the OL module defining the syntax and inference rules of the sequent system. The field file-name specifies the output (LaTeX) file.
The theorem to be proved is specified via a rewriting rule (a term of sort Rule). As an example, the height-preserving admissibility of weakening for G3ip is: Note that the premise and the conclusion have the same height, specifying that if the premise is provable with height at most n so is the conclusion. Since the entailment relation is undecidable in general, all the analyses are performed up to a given search depth (field bound-spec). Hence the procedures are sound (in the sense of the theorems in Sections 5 and 6), but not complete.
As already explained, for modularity, the analyses can depend on external lemmas. Those auxiliary results are specified as already-proved-theorem (backward reasoning) and inv-rules (forward reasoning). Finally, some theorems require mutual induction. For instance, admissibility of contraction for the system of classical logic (Section 8.3) requires a mutual induction on the right and left rules for contraction. The field mutual-ind specifies the other theorems that can be applied on shorter derivations. For that, if the parameter of type GroundTerm is of the form 's[h.Nat], the mutual theorem is instantiated with sequents of (constant) height h.
The main procedures implementing the analysis of admissibility of rules can be found in the parametric module ADMISSIBILITY-ALG{SPEC :: ADMISSIBILITY-SPEC}. Consider, for instance, the task of proving the admissibility of a rule rs. Such a rule is specified as the parameter rule-spec in SPEC. For each rule rt of the system, proof obligations are generated by the module's functionality. Following Definition 6, this is done by unifying the premise/body of rs with the conclusion/head of rt: U := metaVariantDisjointUnify (module, getBody(rule-spec, module) =? getHead(rt, module), empty, 0, N) .
Here, N is a natural number used to enumerate the unifiers. Given the unifier U, it is possible to obtain the premises resulting from applying rt on the body/premise of rs (see Equation (2)). For that, the descent function metaXapply enables the application of a rule to a term according to a given substitution. Computing the ground term that substitutes the variables with fresh constants is a routine exercise, following the inductive definition of the sort Term. The cases for admissibility take the form adm-case(Q, M, GTC, GTP, GG), where: Q is the identifier of the rule rt in Definition 6; M is the module implementing the OL; GTC and GTP correspond to the conclusion and the premises in Equation (2); and GG is the goal to be proved ((i : S)θ in Definition 6).
A useful mnemonic that applies from now on: G is for ground, T for term, C for conclusion, P for premise(s), and the last G in GG for goal.
The function inductive takes as parameter the annotated height of the current goal GG. If it is of the form s(h), rule-spec is instantiated with the height h (as explained in Section 7.2). Otherwise, inductive and mutual-ind return none (an empty list of rules). The function call premises(GTP) generates, for each sequent S in the set of premises GTP, a new axiom rule (S => proved); inv-premises generates further axioms by applying the invertible rules inv-rules to the premises GTP; and already-proved-theorems adds already proved theorems (specified as rules).
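The axiom-generating function can be sketched on the meta-representation of rules. This is an illustrative sketch: the names premises, 'proved.Goal, and the label 'ax are assumptions, not the actual declarations in admissibility.maude.

```maude
op premises : TermList -> RuleSet .
var GT : Term .  var NTL : NeTermList .

--- each provable premise S becomes the axiom (S => proved)
eq premises(empty) = none .
eq premises(GT) = (rl GT => 'proved.Goal [label('ax)] .) .
eq premises((GT, NTL)) = premises(GT) premises(NTL) .
```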
From the extended theory M', metaSearchPath is used to check whether GG can reach proved and determine the status of the current proof obligation.

Invertibility of Rules
The core procedures are in invertibilty.maude. The input to the invertibility analysis is specified as a realization of the functional theory INV-SPEC. There is no need to specify the theorem to be proved: it suffices to take each rule of the input module and "flip" it in order to obtain the invertibility lemma to be proved (see Definition 5). This procedure outputs a LaTeX file with the invertibility status of each rule in the system. In the case of rules with two premises, each premise is analyzed separately (see Def. 7). This allows for proving, e.g., that the rule ⊃ L in the system G3ip is invertible only in its right premise (see Section 8.1).
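The "flipping" of a rule can be sketched directly on its meta-representation. This is a simplification: the actual procedure must also adjust the height annotations and slice two-premise rules, and the operator name flip is an assumption.

```maude
op flip : Rule -> Rule .
vars C P : Term .  var AS : AttrSet .

--- swap the conclusion (left-hand side) and the premise (right-hand side)
eq flip(rl C => P [AS] .) = (rl P => C [AS] .) .
```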
Following the same recipe as for admissibility, the proof obligations for invertibility are generated by solving unification problems. Assume that the rule being analyzed is Q and that it is to be tested invertible with respect to the rule Q'. If Q is a two-premise rule, then it is split into two different rules to be analyzed separately. Assume that the resulting sliced rule, with at most one premise, is R. Given the N-th unifier U between the heads/conclusions of the two rules, the case to be analyzed is:

inv-case(R, Q', apply(T', U),              --- the sequent where R and Q' can be applied
  applyRule(apply(T, U), Q', module, U),   --- applying Q' on the resulting seq.
  applyRule(apply(T, U), R, module, U))    --- applying R on the resulting seq.
The first two parameters identify the case. The third parameter corresponds to the sequent where both rules can be applied, i.e., the conclusion in Equation (6). The fourth parameter corresponds to the premises after the application of Q', i.e., the premises in Equation (6). The last parameter is the goal being proved, i.e., the premise resulting after the application of R on the same sequent ((k : S l )θ in Definition 7). Call GTC, GTP, and GG the last three parameters of the case once their variables are replaced with fresh constants. The inductive reasoning consists in applying, when possible, the rule R on the sequents in GTP. By induction, such an application of R on a sequent S in GTP must preserve the height annotation of S. Hence, a modified version of R is needed, whose conclusion is annotated with the height T instead of s[T]. Call RI the rule resulting from applying this transformation to the rule R being analyzed.
Computing the set of sequents that can be assumed as axioms by induction now becomes a simple task. It suffices to compute one step of rewriting for each sequent GT in the set GTP as follows: metaSearch(MRI, GT, 'G:Goal, nil, '+, 1, k) .
The term MRI inherits the entire functional description of the module specifying the OL, but it contains a single rule, namely RI. The (ground) sequent GT is rewritten to any possible list of sequents (variable 'G:Goal) in this theory in exactly one step ('+ means one or more steps, and the bound in the 6th parameter forces it to be exactly one). The last parameter is used to enumerate all the possible solutions.
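The enumeration of all one-step successors can be sketched as follows, where allSteps is a hypothetical helper and MRI plays the role of the one-rule module described above:

```maude
op allSteps : Module Term Nat -> TermList .
var M : Module .  var GT : Term .  var K : Nat .

--- collect the K-th, (K+1)-th, ... solutions of the one-step search,
--- stopping when metaSearch returns failure
ceq allSteps(M, GT, K)
  = getTerm(metaSearch(M, GT, 'G:Goal, nil, '+, 1, K)),
    allSteps(M, GT, K + 1)
 if metaSearch(M, GT, 'G:Goal, nil, '+, 1, K) =/= failure .
eq allSteps(M, GT, K) = empty [owise] .
```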
Finally, the OL theory M can be extended before attempting a proof of the goal GG of the current case:

Cut Elimination
There are different cut-rules depending on the shape of the sequent system at hand (e.g., one-sided, two-sided, dyadic) and also on the structural rules allowed. The file cut-elimination-base.maude defines common facilities for all the cut-elimination procedures. Impressively enough, only minor extensions to this module were required to implement cut-elimination procedures for the systems in Section 8.
The common interface is the functional theory CUT-SPEC. The parametric module CUT-BASE{SPEC :: CUT-SPEC} contains the common definitions. An operator for the cut-rule is declared (op cut-rule : -> Rule .), but no equation for it is provided: each OL is responsible for completing this definition. For instance, in G3ip, the cut-rule shares the context in the antecedent of the sequent between the two premises. Hence, the file cut-add-scon.maude (additive cut for single-conclusion systems) extends CUT-BASE with an equation defining cut-rule, in which the conclusion of the rule is annotated with inf (the constant of type INat denoting "don't know"). This specification is nothing else than the meta-representation of the rule in Equation (8). In all the analyses, the cut-formula is expected to be named FCut$$ and the heights of the two premises h1$$ and h2$$.
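In object-level notation (the sequent syntax here is hypothetical; the actual equation manipulates the meta-representation), the equation added in cut-add-scon.maude can be pictured as follows, where FCut$$, h1$$, and h2$$ are the names expected by the framework:

```maude
--- additive cut for single-conclusion systems: the antecedent Gamma is
--- shared by the two premises; the conclusion's height is unknown (inf)
eq cut-rule
 = ( rl Gamma |-- inf : G
     => (Gamma |-- h1$$ : FCut$$) ; (Gamma ; FCut$$ |-- h2$$ : G)
     [label('Cut)] . ) .
```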
The module CUT-BASE offers mechanisms to generate, in a uniform way, the proof obligations for the cut-elimination procedures considered here. This is done in two steps. First, one of the rules of the system, identified as Q1, is unified against the first premise of the cut-rule (called lcut). The unifier U must map the variables of lcut to some fresh variables. Hence, before unifying a second rule Q2 against the second premise of the cut-rule, denoted rcut, the substitution U must be applied on rcut (see Definition 8): U' := metaVariantDisjointUnify(module, apply(rcut, U) =? getHead(Q2, module), empty, Nvars, M) .
Induction on the height of the derivation is also defined in CUT-BASE, in a general way and independently of the cut-rule. The syntax x <- t is used to denote the substitution [t/x]. This specification implements ind-H in Definition 8. The first two parameters are the heights of the left and right premises above the cut. The last parameter is the cut-formula considered in the current goal.
Induction on the structure of the formula is partially defined in CUT-BASE, but additional work is needed in the specific OL. Among the general definitions, the first equation applies to constructors for formulas (e.g., '_/\_[A, B], representing A ∧ B), and then induction applies to all the sub-terms in the list of ground terms LGT (if they are of sort Formula). If the parameter is a constant, e.g., 'False.Formula, only the second equation applies and no inductive rule is generated. The keyword owise (a shorthand for otherwise) means "use this equation if all the others defining the corresponding function symbol fail". This is syntactic sugar to simplify the specification, and it can be encoded in plain conditional equational theories [8].
The function $induct-struct calls induct-struct-formula for each term in LGT. The definition of induct-struct-formula is specific to each OL. In the one defined in cut-add-scon.maude, the parameter GTA is a ground term denoting a proper sub-formula of the cut-formula. The cut-rule is instantiated as follows: the head of the rule is unchanged, the height of the premises is 'inf.INat, and the cut-formula is fixed to be (the ground term) GTA. The change in the height of the premises is due to the fact that cuts on smaller formulas do not necessarily preserve the height of the derivation.
The above definition seems general enough for any cut-rule and, in theory, it could be defined once and for all systems. However, the generated rule is problematic from the point of view of proof search. Note that both premises are annotated with 'inf.INat. Hence, even if the rule does not need to "guess" the cut-formula (since it is fixed to a ground term A), it is always possible to rewrite the goal Γ ⊢ G into Γ , A ⊢ G, and later into Γ , A, A ⊢ G, etc. For this reason, in some systems, extra conditions are needed to restrict the application of this rule and reduce the search space (more about this in Section 8). Of course, a bad choice of such conditions may render the analysis inconclusive. A natural rule of thumb that worked in most of the cases was to restrict the application of this rule to goals where the sub-formula A is not already in the context Γ .
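The rule of thumb above can be pictured, in object-level notation, as a conditional rule; the membership check occurs-in is a hypothetical operation of the OL:

```maude
--- cut on the (ground) sub-formula A only when A is not already in the
--- antecedent, preventing repeated applications on the same goal
crl [sCut] : Gamma |-- inf : G
          => (Gamma |-- inf : A) ; (Gamma ; A |-- inf : G)
 if not occurs-in(A, Gamma) .
```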
Finally, the main search procedure must also be tailored for each OL. There is a template that, with little modification, can be used in all the systems reported here. In particular, OLs may define different conditions for the application of the rules with the aim of reducing the search space. In the main definition in cut-add-scon.maude, RS is the set of rules used to extend the theory of the OL before calling the search procedure. This extension is similar to the ones presented for admissibility and invertibility. There are, however, two new ingredients: (a) the way the axioms are generated, and (b) the additional condition deciding whether or not the rule for structural induction is added to solve the current goal GG.
(a) premises-W(S) converts the sequent S into an axiom. Unlike premises(S), used in the previous sections, the rule resulting from premises-W(S) internalizes weakening: if S is the (ground) sequent Γ ⊢ F, then ∆ ⊢ F is provable for any ∆ ⊇ Γ . This avoids the need for adding the rule W to already-proved-theorems, thus reducing the search space. This simplification cannot be used, of course, in substructural logics such as linear logic (Section 8.4). (b) As already explained, the rule for structural induction is problematic from the point of view of proof search. In some OLs it is possible to control its use. For example, in G3ip, the application of ind-F can be restricted to solve only the principal cases (and the non-principal cases then run faster). Such cases can be identified by counting the number of constants of type Formula in the current goal GG (see the last line in the code above). In other systems, however, this simplification does not work, as in the case of the system mLJ (Section 8.2), where structural induction is also needed in some of the non-principal cases.
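In object-level notation (the sequent syntax is hypothetical), the axiom generated by premises-W for a ground sequent Γ ⊢ F can be pictured as a rule with a fresh context variable Delta, so that any superset of Γ closes the goal:

```maude
--- ax/W: Gamma |-- F is assumed provable; the extra variable Delta
--- internalizes weakening (Delta ; Gamma |-- F is also closed)
rl [ax/W] : Gamma ; Delta |-- h : F => proved .
```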
Wrapping up, configuring the cut-elimination procedure requires adjusting and tuning some parameters. As shown in Section 8, logics that share the same structural properties may reuse a common infrastructure. For instance, the cut-rule and procedures defined in the file cut-add-mcon.maude (two-sided, multi-conclusion systems where weakening and contraction are allowed) can be used to prove cut-elimination for the system G3cp (Sec. 8.3), mLJ (Sec. 8.2), and some systems for modal logics (Sec. 8.5). However, cut-elimination is a non-trivial property and hence full automation is impossible in the general case. In each of these systems, the user must determine the invertibility lemmas that will be considered during the search procedure. This is done by simply modifying the input parameter inv-rules in CUT-SPEC.

Identity Expansion
This analysis uses the functional theory in id-expand.maude as interface. Given a ground term F denoting a formula, the call goal(F) returns the sequent to be proved. This definition makes it possible to consider id-expansion theorems for both one-sided and two-sided systems. The last field is used to define other sorts that need to be considered in the analysis. For instance, the specification of modal logics includes the sort BFormula < Formula for boxed formulas (see Section 8.5). By adding BFormula to types-formula, the corresponding case F ⊢ F is generated.

Case Studies
This section explains how structural meta-properties of several propositional sequent systems can be specified and proved with the approach presented in this paper. The site hosting the L-Framework 4 includes all the logical systems described here, the proof-search implementation of the strategies, and the PDFs generated by the proof-search algorithms. The methodology chosen for proving the (meta-)theorems in this section is modular: first, it attempts to build a proof without any external lemma; second, when needed, it analyzes the failing cases and adds already proved theorems to complete the proof. This methodology allows for analyzing the interdependencies between the different results inside the various logical systems. Since there are no similar tools against which to compare the performance of the L-Framework, the main benchmark pursued is to show that it is flexible enough to deal with several proof-theoretic properties with little effort in defining the object logic and the properties of interest. In all cases but those reported in Section 8.4 (where the proof of cut-elimination requires induction on two different rules), implementing the analyses amounts only to instantiating the interfaces/theories described in Section 7.
In order to show the feasibility of the L-Framework, time-related observations are reported. In all the cases, the depth bound (bound-spec) is set to 15 and the experiments are performed on a MacBook Pro, 4 cores 2.3 GHz, and 8GB of RAM, running Maude 3.0. On average, the admissibility analyses are completed in about 5 seconds. Once all the auxiliary lemmas are added, the time needed to complete the proof of cut-elimination depends on whether explicit structural rules are used (about 30 seconds) or not (about 15 seconds).

System G3ip for Propositional Intuitionistic Logic
The system G3ip and its specification as a rewrite theory are presented in Section 5. The proof of meta-theoretical properties is next described in detail.
Weakening. Theorem 1 (see rule H in Equation (5)) and height-preserving admissibility of weakening can be proved without auxiliary lemmas.
Proof See the specification of the theorems in g3i/prop-W.maude and g3i/prop-H.maude. The resulting proofs are in g3i/g3i.pdf. Both properties are proved in less than 1 second without any additional lemma.

⊓ ⊔
Invertibility. All the rules in G3ip are invertible except ∨ R i and ⊃ L . However, the tool initially fails (in 7.6s) to prove the invertibility of ⊃ R , ∧ R , ∧ L , ∨ L , and ⊤ L . Consider the rule ⊃ R . Recall that the invertibility of a rule is proven by testing the local invertibilities relative to all possible rules (see Definition 7). Hence, the tool proves, e.g., that ⊃ R is invertible w.r.t. ∧ L (the symbol • denotes successor): Note the application of the inductive hypothesis on the shorter derivation of height h 3 . All the other cases of local invertibility of ⊃ R are similar, but the cases w.r.t. ⊃ R and ⊃ L fail. When the admissible rules H and W (Theorem 6) are added to the set of already-proved-theorems, the tool completes the missing cases. The failure w.r.t. ⊃ R is the dummy case where the same rule is applied on the same formula and the proof transformation should be trivial; an application of H completes the case. If ⊃ L is applied on the sequent ∆ 6 , F 1 , F 4 ⊃ F 5 ⊢ s(h 3 ) F 2 , two premises are obtained: the second premise is provable by induction, while the proof of the first premise requires weakening on F 1 . Once W is added, the proofs of invertibility of ∧ R , ∧ L , ∨ L , and ⊤ L are also completed. Some cases are vacuously discharged. For instance, there are no proof obligations for the invertibility of ⊃ R w.r.t. ∧ R (it is impossible to unify the conclusions of these rules). Moreover, the cases of the axioms I, ⊤ R , and ⊥ L are trivial, since the only proof obligation is to reduce proved into proved. The rules ∨ R 1 , ∨ R 2 , and the left premise of ⊃ L are clearly not invertible; the failures reported by the tool provide good evidence that these cases do not succeed.

Theorem 7 (Invertibility) All the rules except ∨ R i and ⊃ L are height-preserving invertible in G3ip. Moreover, the right premise of ⊃ L is height-preserving invertible.
Proof The invertibility of ∧ R , ∧ L , ∨ L , and ⊤ L depends on the admissibility of W (Theorem 6). The invertibility of ⊃ R and the invertibility of the right premise of ⊃ L require Theorem 6 (W and H). The specification of the property is in prop-inv.maude. The analysis is completed in 7.7 seconds.

⊓ ⊔
Contraction. When attempting a proof of admissibility of contraction, the local admissibility cases w.r.t. ⊃ L , ∧ L , ∨ L , and ⊤ L fail. In the failing case for ∨ L , the inductive hypothesis can be used neither on the left nor on the right premise. After adding 'OrL to the set of already-proved invertible rules (field inv-rules), the L-Framework completes this case. Due to unification, there are indeed two cases: one in which the disjunctive formula is contracted and one of the copies is principal, and another in which the disjunction is not contracted (instead, another formula F 6 is). The second case follows without using the invertibility lemma.

Theorem 8 (Contraction) The contraction rule C (Equation (1)) is height-preserving admissible in G3ip.
Proof The cases ∨ L , ∧ L , and ⊤ L require the invertibility of the respective rules (Theorem 7). The case ⊃ L requires the invertibility of the right premise of this rule (specified in inv-rules as 'impL$$1). The proof takes 1.3 seconds. ⊓ ⊔

Cut-Elimination. Controlling the structural rules is one of the key points to reduce the search space in sequent systems. The system G3ip embeds weakening in its initial rule (Γ can be an arbitrary context) and contraction in rules with two premises (all rules are additive and contraction is explicit in the left premise of ⊃ L ). Thus, W and C are admissible in G3ip, and the proof-search procedure does not need to guess how many times a formula must be contracted or whether some formulas need to be weakened. The cut-rule considered for G3ip is additive (see Equation (8)), hence also carrying an implicit contraction. The cut-elimination procedure for G3ip implements two strategies for reducing the search space (see Section 7.6):
1. If GTP is the resulting set of sequents that can be assumed to be provable, then such sequents are added as axioms that internalize weakening. For instance, if Γ ⊢ F is a (ground) sequent in GTP, then ∆ ⊢ F is assumed to be provable whenever ∆ ⊇ Γ . This avoids the need for adding the rule W to the set of already proved theorems, thus reducing the non-determinism during proof search: weakening is "delayed" until the leaves of the derivation are reached.
2. The inductive hypothesis on the structure of the formula is only used in the principal cases. Hence, fewer alternatives are explored when proving the non-principal cases.
Strategy (1) considerably affects the performance of both failing and successful attempts, as explained below. Strategy (2) saves a few seconds when all the needed auxiliary lemmas are added and the proof succeeds; in failing attempts, this strategy has an important impact.
Due to (1), W and C are not added to already-proved-theorems and, for the moment, consider also that none of the invertibility lemmas is added. This experiment leads to proofs for the trivial cases ( ⊤ R , ⊥ L , I and ⊤ L ) and fails for the other rules in almost 15 seconds 5 .
Non-principal Cases. Usually, the non-principal cases are easily solved by permuting down the application of a rule and reducing the height of the cut. Some of these cases are already proved in this first iteration. For instance, the case (∧ L , ∨ 1 ) - ∧ L applied on the left premise and ∨ 1 on the right premise - is solved as follows: In the right derivation, hCut is an application of the cut-rule with shorter derivations. Moreover, ax/W finishes the proof due to the sequents assumed to be provable (on the left derivation), possibly applying W; in this particular case, weakening is not needed. Some other similar cases, however, fail. Take for instance the case (∧ L , ∧ L ): What is missing here is the invertibility of ∧ L on the assumption ∆ 12 , F 9 , F 10 , F 6 ∧ F 7 ⊢ h 8 F 11 . If this invertibility lemma is added, the tool completes the case. Inspecting similar failing cases suggests the need for also including the invertibility of the rules ∧ R and ∨ L , as well as the invertibility of the right premise of ⊃ L . This solves some missing cases but, still, the cases for ⊃ L , ∧ L , and ∨ L are not complete. One of the failures for (⊃ L , ⊃ L ) is the following: The cut-formula is F 11 and it is not principal in either of the premises. Once ⊃ L is applied on the goal ∆ 10 , F 7 ⊃ F 8 ⊢ F 9 , the resulting left premise is already proved (see the left-most sequent in the left derivation). The right premise can be proved with H (Theorem 6) if it is added to already-proved-theorems.

Principal Cases. Note that in all the above derivations, due to unification, there is always more than one constant of sort Formula in the goal: besides the formula in the succedent of the sequent, there are formulas in the antecedent that are needed for the application of a left rule in the left premise of the cut. Due to strategy (2), cuts on smaller formulas are not considered during the proof search for these cases. The situation is different in the principal cases.
Consider for instance the case (⊃ R , ⊃ L ): Rule sCut corresponds to a cut on a sub-formula. Note that the antecedent of the goal is just a constant of sort MSFormula (∆ 9 ).

Theorem 9 (Cut elimination) The cut-rule in Equation (8) is admissible in G3ip.
Proof See the specification and dependencies in g3i/prop-cut.maude. The proof requires Theorem 6. All the cases -except for ⊤ R , ⊤ L , ⊥ L , and I-require Theorem 7. The proof is completed in 13.8 sec.

⊓ ⊔
Multiplicative Cut in G3ip. Observe that, by adopting the additive version of the cut-rule, several common proof-search problems are avoided, e.g., loops created by the uncontrolled use of contraction. But what if the following multiplicative cut, where the context is split between the two premises, is considered in G3ip instead?
The search tree is now considerably bigger, since all the alternatives for splitting the context Γ , ∆ need to be considered (see cut-mul-scon.maude). This is an interesting question, and the discussion presented next will pave the way for the analysis of sequent systems with linear contexts (see Section 8.4). Note that, if only the rule H (Theorem 6) is added, then all the cases but the principal ones for ⊃ and ∧ succeed in 26 seconds. The failure on (∧ R , ∧ L ) is solved by cutting with F 6 and F 7 . However, since the cut-rule is multiplicative, the contexts ∆ 2 and ∆ 9 need to be contracted first. Adding contraction on terms of sort MSFormula makes the proof-search procedure infeasible: any subset of the antecedent context can be chosen for contraction, and such a rule can be applied on any sequent/goal. Instead, a more controlled version of contraction can be added: contract the whole context only if there are no duplicated elements in it. This more restricted rule cannot be applied twice on the same goal, thus reducing the number of alternatives and leading to the desired proof transformation.

Proof The specification is in prop-cut-mul.maude. See the contraction rule used in the definition of already-proved-theorems. No invertibility lemma is needed for this proof. The analysis is completed in 25.6 seconds.

⊓ ⊔
If W is not embedded in the initial axioms and C is allowed on arbitrary contexts, there is little hope to conclude these proofs in reasonable time.
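The restricted contraction rule described above (duplicate the whole antecedent only when it contains no repeated formula) can be pictured, in object-level notation, as a conditional rule; no-duplicates is a hypothetical predicate of the OL:

```maude
--- backward application of contraction: to prove Gamma |-- G it
--- suffices to prove Gamma ; Gamma |-- G; the guard prevents the
--- rule from firing twice on the same goal
crl [C] : Gamma |-- h : G => Gamma ; Gamma |-- h : G
 if no-duplicates(Gamma) .
```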
Identity Expansion. Finally, the dual property of cut-elimination, identity expansion, is easily proved in the L-Framework.

Proof See prop-ID.maude. The rule W needs to be added to the set of already proved theorems. It is used, e.g., in the case of ⊃. ⊓ ⊔

The Multi-Conclusion System mLJ

Fig. 2: The multi-conclusion intuitionistic sequent system mLJ.

Maehara's mLJ [22] is a multiple-conclusion system for intuitionistic logic. The rules of mLJ have exactly the same shape as those of G3ip, except for the right rules for disjunction and implication (see Fig. 2). The right disjunction rule in mLJ matches the corresponding rule in classical logic, where disjunction is interpreted as the comma in the succedent of sequents. The right implication rule, on the other hand, forces all formulas in the succedent of the premise to be weakened. This guarantees that, when the ⊃ R rule is applied on A ⊃ B, the formula B must be proved assuming only the pre-existing antecedent context extended with the formula A. This creates an interdependency between A and B.
Weakening. The proof of admissibility of weakening in mLJ is similar to that of G3ip, noting only that, in mLJ, weakening is also height-preserving admissible in the succedent of sequents.
Theorem 12 (Weakening and Weak-height) If mLJ ◮ n Γ ⊢ ∆ , then mLJ ◮ s(n) Γ ⊢ ∆ (H). Moreover, the following rules are height-preserving admissible in mLJ:

Proof The three properties are specified in mLJ/prop-WH.maude. No auxiliary lemmas are needed. This theorem is proved in less than 3 seconds.

⊓ ⊔
Invertibility. All the rules in mLJ are invertible, with the exception of ⊃ R .
Proof The specification is in mLJ/prop-C.maude. The proof of admissibility of C L (resp., C R ) requires the invertibility of ⊤ L , ∨ L , ∧ L , and ⊃ L (resp., ⊥ R , ∨ R , and ∧ R ).

⊓ ⊔

Cut-elimination and Identity Expansion. In the cut-elimination procedure for G3ip, the application of the rule for structural induction (sCut) is restricted to goals with at most one term of sort Formula (corresponding to the principal cases). That simplification is not possible in mLJ: with that restriction, the cases for ∧ R and ∨ R fail w.r.t. ⊃ R . Consider the case (∨ R , ⊃ R ): Note that the cut-formula F 6 ∨ F 7 is principal in the left premise, but not in the right one. This case cannot be solved by reducing the height of the cut by first applying ⊃ R , since this would remove the context ∆ 11 (needed in the left premise). The cut-elimination procedure for mLJ is based on the module defined in the file cut-add-mcon.maude (additive multi-conclusion), where sCut is added in cases where the goal sequent has at most two terms of sort Formula. Hence, sCut is allowed also in non-principal cases. This solves the previous case: the search procedure finds a proof transformation that applies sCut twice (with cut-formulas F 6 and F 7 ), closing the resulting branches with ⊃ R , the invertibility lemmas (inv-th/ax), and ax/W.

Theorem 15 (Cut-elimination and ID-expansion) The following cut rule is admissible in mLJ. Moreover, for any F, the sequent F ⊢ F is provable.
Proof For cut-elimination (prop-cut.maude), H as well as the invertibility of the rules ∧ L , ∧ R , ∨ L , ∨ R , and ⊃ L are needed. For identity-expansion (prop-ID.maude), the admissibility of W R and W L (Theorem 12) is needed. The proof takes 35.2 seconds. ⊓ ⊔

System G3cp for Propositional Classical Logic
G3cp [35] is a well-known two-sided sequent system for classical logic, where the structural rules are implicit and all the rules are invertible. The rules are similar to those of mLJ with the exception of ⊃ R .

Weakening. Admissibility of weakening in G3cp follows the same lines as in mLJ.

Theorem 16 (Weakening and Weak-height)

Contraction. An attempt at proving admissibility of contraction on the left side of the sequent fails due to ⊃ L (and on the right, due to ⊃ R ). Here is the failing case for C L :

Due to invertibility of ⊃ L , the sequent ∆ 5 ⊢ ∆ 4 , F 2 , F 2 is provable. However, induction does not apply to this sequent, since F 2 is on the right side of the sequent. Hence, the proof of admissibility of contraction is by mutual induction on C L and C R . For instance, the case of ⊃ L is solved by applying C R on a shorter derivation: In the second proof transformation, contraction is applied on a formula different from the implication, and mutual induction is not needed.

Theorem 18 (Contraction) The rules C L and C R are height-preserving admissible in G3cp.

Fig. 3: One-sided multiplicative-additive linear logic (MALL).
Proof The specification is in prop-C.maude. The proof of admissibility of C L (resp., C R ) requires the invertibility of ⊤ L , ∨ L , ∧ L , and ⊃ L (resp., ⊥ R , ∨ R , ∧ R , and ⊃ R ). For C L , the specification includes a definition for mutual induction stating that C R can be applied on shorter derivations of height GT (due to the pattern "'suc [GT]" in the definition of the equation). Similarly, the proof of C R requires mutual induction on C L (specifically for the case ⊃ R ).
⊓ ⊔

Cut-elimination and Identity Expansion. The proofs of cut-elimination and identity expansion are also similar to those for mLJ.
Theorem 19 (Cut-elimination and ID-expansion) The cut-rule in Theorem 15 is admissible in G3cp. Moreover, for any F, the sequent F ⊢ F is provable.
Proof For cut-elimination (prop-cut.maude), H as well as the invertibility of the rules ⊃ L , ⊃ R , ∧ L , ∧ R , ∨ L , and ∨ R are needed. For identity-expansion (prop-ID.maude), the admissibility of W R and W L is needed. The proof of cut-elimination takes 22.5 seconds and id-expansion less than one second. ⊓ ⊔

Propositional linear logic
Linear logic (LL) [14] is a resource-conscious logic. Formulas are consumed when used during proofs, unless they are marked with the exponential ? (whose dual is !), in which case they can be weakened and contracted. Besides the exponentials, propositional LL connectives include the additive conjunction & and disjunction ⊕, their multiplicative versions ⊗ and ⅋, and the units 1, 0, ⊤, ⊥. These connectives actually form pairs of dual operators, where A ⊥ denotes the negation of the formula A. All negations in LL can be pushed inwards and restricted to the atomic scope. First, consider the fragment of LL without the exponentials, that is, (classical) propositional multiplicative-additive linear logic (MALL). The one-sided proof system for MALL is depicted in Fig. 3. As expected, the structural rules of weakening and contraction are not admissible in this system.
Cut-elimination and Identity Expansion. Since the system under consideration is one-sided, the cut-rule has a one-sided presentation. The theory in cut-lin-osided.maude (linear cut, one sided) specifies the operation op dual : Formula -> Formula . and the OL must define equations for it reflecting the De Morgan dualities of the connectives.
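The equations for dual follow the De Morgan dualities of LL. Using hypothetical ASCII names for the connectives (** for ⊗, $ for ⅋, o+ for ⊕, p/n for positive/negative atoms), a sketch for MALL is:

```maude
vars A B : Formula .

--- atoms: (p A)^ = n A and vice versa
eq dual(p A) = n A .        eq dual(n A) = p A .
--- multiplicatives: (A x B)^ = A^ par B^
eq dual(A ** B) = dual(A) $ dual(B) .
eq dual(A $ B)  = dual(A) ** dual(B) .
--- additives: (A & B)^ = A^ o+ B^
eq dual(A & B)  = dual(A) o+ dual(B) .
eq dual(A o+ B) = dual(A) & dual(B) .
--- units
eq dual(1) = bot .   eq dual(bot) = 1 .
eq dual(top) = 0 .   eq dual(0) = top .
```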
Theorem 21 (Cut-elim. and identity-expansion) The (one-sided) cut-rule is admissible in MALL. Moreover, for any formula F, the sequent ⊢ F, F ⊥ is provable.
Proof The notation F⊥ corresponds to dual(F). The proof of cut-elimination (specification in MALL/prop-cut.maude) relies only on the admissibility of H. Here is the principal case (⊗, ⅋):

Exponentials. Consider the one-sided system for linear logic obtained by adding the exponential ? and its dual !. The system LL results from adding the inference rules in Fig. 4 to those in Fig. 3. Note the explicit rules for weakening and contraction on formulas marked with ?. The specification of the rule ! (called promotion) requires a new sort to guarantee that the context contains only formulas marked with ? (see the specifications in the directory LL).

The cut-elimination procedure for this system is certainly more involved. An attempt at proving this result fails for ! and ?C: this is the principal case where the cut formula is !F and it is promoted, and its dual contracted. This case is solved by using the rule below, which cuts !F with n copies of the formula ?F⊥: Hence, the cut-elimination procedure for this system must mutually eliminate the cut-rules in Theorem 21 and Equation 10. More precisely, the elimination of one of the cases of Cut relies on the application of mCut on shorter derivations, and one of the cases in the elimination of mCut requires the application of Cut on smaller formulas. Here are the two relevant cases: In the last derivation, ctr(s n5, ? dual(F4)) denotes (?F4⊥)^{s(n5)} and the application of Cut is on a smaller formula (F4 instead of !F4).
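The well-foundedness of this mutual scheme can be pictured with a simple lexicographic measure. The sketch below is our own illustration (not part of the paper's formal development), comparing pairs (size of the cut formula, height of the cut's premises).

```python
# Sketch (our illustration) of why the mutual elimination of Cut and
# mCut terminates: every recursive call strictly decreases the
# lexicographic measure (size of the cut formula, derivation height).

def measure(cut_formula_size: int, height: int) -> tuple:
    """Python tuples compare lexicographically, matching the measure."""
    return (cut_formula_size, height)

# Cut defers to mCut on the SAME formula but SHORTER derivations:
assert measure(5, 3) < measure(5, 4)
# mCut defers to Cut on a SMALLER formula (F instead of !F), any height:
assert measure(4, 9) < measure(5, 0)
```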
Theorem 23 (Cut-elim. and id-exp.) The following rules are admissible in LL: Moreover, for all formulas F, the sequent ⊢ F, F ⊥ is provable.
Proof The procedures using mutual induction are defined in LL/cut-ll.maude and LL/cut-ll-cc.maude (for mCut). The properties are specified in prop-cut.maude and prop-cut-cc.maude. Discharging the cases for Cut (resp., mCut) takes 13.2 seconds (resp., 115.6 seconds). In cut-ll.maude, the definition receives as parameters the heights of the two premises of the cut and the cut-formula F. If F is not a formula marked with ! or ?, no additional rule is generated (the [owise] case in the definition). Also, the pattern matching on the heights of the derivations allows for controlling the application of the cut-rule on shorter derivations. A similar definition can be found in cut-ll-cc.maude.
The elimination of Cut assumes as auxiliary lemmas H and also a generalization of ?W on terms of sort ?MSFormula: As explained in Sec. 8.1, arbitrary applications of contraction are problematic for proof search. Hence, in prop-cut-cc.maude, ?GC is introduced as a conditional rule that can be applied on a given multiset only if there is exactly one occurrence of it in the current sequent. As in the case of ?GW, the admissibility of ?GC cannot be proved in the L-Framework. Discharging the cases of Cut (resp., mCut) takes 30s (resp., 124s). ⊓ ⊔

Dyadic System for Linear Logic. Since formulas of the form ?F can be contracted and weakened, such formulas can be treated as in classical logic. This rationale is reflected in the syntax of the so-called dyadic sequents of the form ⊢ Γ : ∆, interpreted as the linear logic sequent ⊢ ?Γ, ∆ where ?Γ = {?A | A ∈ Γ}. It is then possible to define a proof system without explicit weakening and contraction (system DyLL in Fig. 5). The complete dyadic proof system for linear logic can be found in [1].
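The dyadic reading amounts to a one-line translation. The encoding below is a hypothetical illustration (formulas as strings), not the DyLL specification itself.

```python
# Sketch of the interpretation of dyadic sequents: ⊢ Γ : ∆ abbreviates
# the linear logic sequent ⊢ ?Γ, ∆, where every formula of the
# classical context Γ is marked with the exponential ?.

def dyadic_to_ll(gamma, delta):
    """Translate the dyadic sequent ⊢ Γ : ∆ into the flat sequent ⊢ ?Γ, ∆."""
    return ["?" + a for a in gamma] + list(delta)

assert dyadic_to_ll(["A", "B"], ["C"]) == ["?A", "?B", "C"]
```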
Proof For the admissibility results, see DyLL/prop-H.maude, prop-C.maude prop-W.maude and prop-WB.maude. The invertibility results are specified in DyLL/prop-inv.maude. H and the admissibility of C and W do not require any additional lemma. The admissibility of W! depends on H. The invertibility results rely on H, W, and W!.
⊓ ⊔ As in the system LL, the cut-elimination theorem for DyLL requires mutual induction on two different rules. The rule Cut! below internalizes the storage of formulas marked with ? into the classical context.
Theorem 25 (Cut-elim. and id-exp.) The following rules are admissible in DyLL: Moreover, for any formula F, the sequent ⊢ · : F, F⊥ is provable.

⊓ ⊔
Intuitionistic Linear Logic. The directory ILL contains the specification and analyses for intuitionistic linear logic (ILL). In this system, the multiplicative disjunction ⅋ is not present and the linear implication −• needs to be added (in classical LL, F −• G is a shorthand for F⊥ ⅋ G). The resulting system is two-sided and single-conclusion. Formulas marked with ! on the left of the sequent can be weakened and contracted. The proof of cut-elimination follows the same principles as the one for LL, and the multicut rule below is required, where !Γ is a multiset of formulas marked with ! (specified as the sort !MSFormula).

Fig. 6: The modal sequent rules for K (k) and S4 (T + 4)

Normal modal logics: K and S4
Modal logics extend classical logic with the addition of modal connectives (e.g., the unary modal connective □), used to qualify the truth of a judgment, widening the scope of logical-conceptual analysis. The alethic interpretation of □A is "the formula A is necessarily true". A modal logic is normal if it contains the axiom K: □(A ⊃ B) ⊃ (□A ⊃ □B) and it is closed under generalization: if A is a theorem then so is □A. The smallest normal modal logic is called K, and S4 extends K by assuming the axioms (T) □A ⊃ A and (4) □A ⊃ □□A. The sequent systems considered here for the modal logics K and S4 are extensions of G3cp with the additional rules for the modalities depicted in Fig. 6.
Structural rules. The admissibility of the structural rules follows as in Section 8.3.
Theorem 26 (Structural rules) If K ◮ n Γ ⊢ ∆ , then K ◮ s(n) Γ ⊢ ∆ (H). Similarly for S4. Moreover, the rules W L , W R , C L , and C R are height-preserving admissible in both K and S4.
Invertibility. As in G3cp, all the rules for the propositional connectives are invertible. Furthermore, T is invertible, but k and 4 are not (due to the implicit weakening).

Theorem 27 (Invertibility) Only the rule T in Fig. 6 is invertible.
Proof See the specifications in K/prop-inv.maude and S4/prop-inv.maude. H is required in this proof. Below, one of the trivial cases showing that the rule k is not invertible: Multisets of boxed formulas belong to the sort MSBFormula and the operation op unbox : MSBFormula -> MSFormula . removes the boxes. ⊓ ⊔

Cut-elimination and Identity Expansion. The cut-elimination procedure for K and S4 uses the same infrastructure developed for G3cp.
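The unbox operation can be mirrored as follows; the multiset encoding and the string representation of boxed formulas are illustrative assumptions, not the actual Maude module.

```python
from collections import Counter

# Sketch (illustrative encoding) of op unbox : MSBFormula -> MSFormula .
# It removes the outermost box from every formula in a multiset of
# boxed formulas; the sort guarantees every member carries a box.

def unbox(boxed: Counter) -> Counter:
    out = Counter()
    for f, copies in boxed.items():
        assert f.startswith("□"), "MSBFormula admits boxed formulas only"
        out[f[1:]] += copies     # drop the leading □
    return out

assert unbox(Counter({"□A": 2, "□B": 1})) == Counter({"A": 2, "B": 1})
```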
Theorem 28 (Cut-elimination and ID-expansion) The cut-rule in Theorem 15 is admissible in both K and S4. Moreover, for any F, the sequent F ⊢ F is provable in K and S4.
Proof The specifications are in K/prop-cut.maude and S4/prop-cut.maude. In both cases, H is required as well as the invertibility of the propositional rules. ⊓ ⊔

Modal Logic S5. Some extensions of the modal logic K do not have (known) cut-free sequent systems. In particular, consider the system S5, obtained by extending K with the axiom T and the rule below: (see KT45/KT45.maude). Using the same strategy as in Theorem 28, the tool is able to discharge some of the proof obligations for cut-elimination. However, some subcases involving 45 and the other two modal rules (T and k) fail. Here is one example: This case is clearly not provable: the cut-formula F10 is not decomposed in the left premise. Hence, cutting with F10 will not finish the proof. The only alternative is to reduce the height of the cut. The rule T cannot be applied (on F10) on the last sequent. If 45 is applied on the last sequent, either F8 or one of the formulas in the boxed context ∆11 will lose the box. In both cases, none of the leaves of the left derivation can be used.
The failing cases are interesting because they pinpoint where cut-elimination fails: if the goal is to propose a cut-free sequent system for a logic, certain shapes of rules should be avoided.

Concluding Remarks
Checking structural properties of proof systems demands trustworthy methods. Usually, proving such properties is done via case-by-case analysis, where all the possible combinations of rule applications in a system are exhausted. The advent of automated reasoning has completely changed the landscape, since theorems can nowadays be proved automatically in meta-logical frameworks (see, e.g., [10]). This approach has brought a fresh perspective to the field of proof theory: useless proof-search steps, usually specific to a particular logic, have been disregarded in favor of developing universal methods that provide general automation strategies. These developments have ultimately helped in abstracting out conceptual characteristics of logical systems, as well as in identifying effective frameworks that can capture (and help in reasoning about) them in a natural way.
This work moves forward in this direction: it proposes a general, natural, and uniform way of proving key structural properties of sequent systems by using rewriting logic as a logical and meta-logical framework [2]. It ultimately enables modular and incremental proofs of meta-level properties of propositional sequent systems, both specified and mechanized in the language of Maude [8]. The approach builds on top of core algorithms that are combined and used for proving admissibility and invertibility of rules, as well as cut-elimination and identity expansion of sequent systems.
The choice of rewriting logic as a meta-logical framework brings key advantages. Indeed, as detailed below, while approaches using logical frameworks depend heavily on the specification method and/or the implicit properties of the meta and object logics, rewriting logic enables the specification of the rules as they are actually written in text and figures [2,23]. Moreover, notions such as derivability and admissibility of rules can be neatly formulated in the context of rewriting systems [15].
Consider, for instance, the LF framework [32] based on intuitionistic logic, where the left context is handled by the framework as a set. Specifying sequent systems based on multisets requires elaborate mechanisms in most logical frameworks, which makes the encoding of the system of interest far from natural and straightforward. Moving from intuitionistic to linear logic solves this problem [6,26], but still several sequent systems cannot be naturally specified in frameworks based on LL, as is the case for mLJ. This latter situation can be partially fixed by adding subexponentials [9] to linear logic (SELL) [28,29]. However, the resulting encodings are often non-trivial and difficult to automate. Moreover, several logical systems cannot be naturally specified in SELL, such as the modal sequent system K [31].
A completely different approach is presented in [37], where cut-elimination is obtained by considering the relation between the cut-rule in Gentzen's system LK and the resolution rule for (propositional) classical logic. But, again, this method is restricted to systems having a (classical) semantics as a starting point.
All in all, this paper presents yet more evidence that rewriting logic is an innovative and elegant framework for both specifying and reasoning in and about logical systems, with the further benefit of easy modular extension. In fact, a strong conjecture is that only mild adjustments to the proposed approach and algorithms are needed to reason about other systems, including normal (multi-)modal [19] and paraconsistent [17] sequent systems. It also seems possible to handle variants of sequent systems themselves, like nested [4] or linear nested [18] systems. This ultimately widens the scope of logics that can be analyzed in the L-Framework, such as normal and non-normal modal logics [20]. Considering first-order sequent systems is also an interesting path for future research. Previous work on mechanizing first-order sequent systems in rewriting logic, including proof-search heuristics at the meta-level, has been presented in [33].
A usual concern when a new sequent system is proposed is how to implement it. Few implementation efforts have provided tools for emerging sequent systems for logics such as epistemic, lax, deontic, knotted, and linear-modal logics. It would not require much effort to reuse some of the algorithms presented here to implement a procedure that, given the inference rules of a sequent system, outputs an implementation of such a system in Maude. Also, implementing new induction principles on lists/multisets may help in obtaining new automatic proofs (see Theorem 23). More interestingly, using invertibility lemmas would also enable the generation of a weak-focus [1] version of the original systems, thus eliminating part of the non-determinism during proof search. It should be noted, however, that this depends on a deeper investigation of the role of invertible rules as equational rules in rewriting logic. While this idea sounds more than reasonable, it is necessary to check whether promoting invertible rules to equations preserves completeness of the system (e.g., the resulting equational theory needs to be Church-Rosser and terminating modulo the structural axioms of the operators). If the answer to this question is affirmative for a large class of systems, then the approach presented here also opens the possibility of automatically generating focused systems.
Finally, a word about cut-elimination. Gentzen's usual cut-elimination proof strategy can be summarized by the following steps: (i) transforming a proof with cuts into a proof with principal cuts; (ii) transforming a proof with principal cuts into a proof with atomic cuts; and (iii) transforming a proof with atomic cuts into a cut-free proof. While step (ii) is not difficult to solve (see, e.g., [26]), steps (i) and (iii) can be problematic to mechanize. The results presented in this work suggest that such techniques can be automated to a significant degree, as showcased with various proof systems in Section 8.