A Logic of Separating Modalities

We present a logic of separating modalities, LSM, that is based on Boolean BI. LSM's modalities, which generalize those of S4, combine, within a quite general relational semantics, BI's resource semantics with modal accessibility. We provide a range of examples illustrating their use for modelling. We give a proof system based on a labelled tableaux calculus with countermodel extraction, establishing its soundness and completeness with respect to the semantics.


Introduction
The concept of resource is important in many fields of enquiry, including, among others, computer science, economics, and security. In recent years, mathematical work in logic has begun to analyse the concept of resource in quite systematic and quite useful ways, with computer science providing a rich source of motivations and examples.
One impetus for this work was provided by the so-called resource interpretation of Girard's Linear Logic [19], in which the number of occurrences of a propositional formula in a sequent is counted and in which the exponentials are used to provide countably infinitely many copies of propositional formulae. An alternative approach, inspired, on the one hand, by a long semantic history in relevant logic (e.g., [34,11]) and, on the other, by work in the semantics of type theories, is exemplified by O'Hearn and Pym's Logic of Bunched Implications (BI) [30,26,33,16,17]. In BI, the concept of resource resides in an interpretation of BI's semantics: this approach, and its developments, is known as resource semantics.
Conceptually, resource semantics begins with a simple axiomatization of resource. Starting with a given homogeneous set of resource elements, for example, bags of fruit, units of currency, or computer memory, we expect the following properties:
- to be able to combine two units of the given type of resource to form a new unit of that type of resource;
- to be able to compare (using either a simple equality or an ordering) two units of a given type of resource;
- that combination and comparison should be appropriately compatible.
This basic axiomatization has proved remarkably robust, supporting, for example, a good deal of work in Separation Logic and its precursors and developments [11,22,28,35,27], and a vast subsequent literature. Mathematically, this basic set-up is captured by a pre-ordered monoid of resources, defined as follows: R = (R, ⊑, •, e), where R is a set of resource elements, ⊑ is a pre-order (writing = for ⊑ ∩ ⊒), and • is a monoidal composition with unit e, subject to the functoriality coherence condition that if r = s and r′ = s′, then r • r′ = s • s′ [30,26,33,17].
The semantics of (Boolean) BI is given using a satisfaction relation between resources and propositional formulae, with cases such as

r ⊨ φ_1 ∧ φ_2 iff r ⊨ φ_1 and r ⊨ φ_2
r ⊨ φ_1 ∗ φ_2 iff there exist r_1 and r_2 such that r_1 • r_2 = r, r_1 ⊨ φ_1, and r_2 ⊨ φ_2
r ⊨ φ_1 −∗ φ_2 iff for all s, if s ⊨ φ_1, then r • s ⊨ φ_2.

In terms of resource semantics, the additive conjunction (∧) is simply interpreted as specifying that the conjuncts must share the available resources, whereas in the case of the multiplicative conjunction (∗) the available resources must be divided between the two conjuncts. Similarly, in the multiplicative implication (−∗), the resources required to support the implicational formula must be combined with those required to support the 'input' formula in order to obtain, by implication, the resources required to support the 'output' formula.
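To make the additive/multiplicative distinction concrete, the clauses above can be prototyped over the total monoid (N, +, 0). This is a minimal sketch of our own; the tuple encoding of formulae and the `sat` function are illustrative assumptions, not constructions from the paper.

```python
# Boolean BI satisfaction over the total resource monoid (N, +, 0).
# Formulae are nested tuples; atoms are interpreted by a valuation
# mapping atom names to sets of resources. Illustrative only.

RESOURCES = range(6)  # a finite fragment of N, enough for the examples

def sat(r, phi, val):
    """Does resource r satisfy formula phi?"""
    op = phi[0]
    if op == "atom":
        return r in val[phi[1]]
    if op == "and":    # additive conjunction: both conjuncts share r
        return sat(r, phi[1], val) and sat(r, phi[2], val)
    if op == "star":   # multiplicative conjunction: r is split as r1 + r2
        return any(sat(r1, phi[1], val) and sat(r - r1, phi[2], val)
                   for r1 in range(r + 1))
    if op == "wand":   # multiplicative implication: combine r with any input s
        return all(not sat(s, phi[1], val) or sat(r + s, phi[2], val)
                   for s in RESOURCES)
    raise ValueError("unknown connective: %s" % op)

val = {"one": {1}, "even": {0, 2, 4}}
one = ("atom", "one")
assert sat(2, ("star", one, one), val)      # 2 splits as 1 + 1
assert not sat(2, ("and", one, one), val)   # but 2 itself is not 'one'
assert sat(1, ("wand", one, ("atom", "even")), val)  # 1 + 1 = 2 is even
```

The first two assertions exhibit exactly the sharing/division distinction: the multiplicative conjunction succeeds at resource 2 by splitting it, while the additive conjunction fails because 2 itself does not satisfy the atom.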
We can also work with intuitionistic BI, with its intuitionistic additives, as in [30,33,16,17], by considering a monoid of resources that carries not merely an equality but a pre-order, allowing intuitionistic implication to be defined in the usual way and leading to the multiplicative conjunction

r ⊨ φ_1 ∗ φ_2 iff there exist r_1 and r_2 such that r_1 • r_2 ⊑ r and r_1 ⊨ φ_1 and r_2 ⊨ φ_2.

In this case, the functoriality condition is that if r ⊑ s and r′ ⊑ s′, then r • r′ ⊑ s • s′.

The dynamics of systems is a central concern in computer science. Many models and logics have been proposed in order to capture system behaviours and reason about their properties; in particular, modal logics based on S4 or S5 and their intuitionistic variants [2,36], and temporal logics such as LTL [31] or CTL [12]. The interest in such logics derives from their ability to express properties such as invariance (is a property satisfied in all reachable states of the system?) and reachability (is it possible to reach a state satisfying a property?).
Modal extensions of BI have been proposed in order to introduce dynamics into resource semantics. One of them, called MBI [6,4,5], is a logic in which resources and processes co-evolve according to an operational semantics based on judgements of the form R, E −a→ R′, E′, meaning that the process E evolves by performing an action a relative to available resources R so as to become the process E′ with available resources R′. This logic captures the manipulation of resources through the dynamics of a system, but is not able to express properties relative to quantified actions (e.g., properties deriving from performing any action). MBI's purely logical theory remains relatively undeveloped. Nevertheless, the use of these ideas as a basis for a rigorously resource-based modelling tool has been described in [7,5].
Another modal extension of BI, called DBI, introduces a simple notion of dynamic resource in which properties of resources can change or be modified during the evolution of the system [8]. The modalities of DBI (♦ and □) allow the expression of properties of resources at any reachable state. Moreover, there exists a sound and complete calculus with a countermodel-extraction method for this logic. DBI is not able to capture resource manipulations by a system: its models capture systems that modify properties of resources, but not systems that produce and consume resources.
In this paper, we present a modal logic of resources, LSM, for 'Logic of Separating Modalities', that is based on Boolean BI's resource semantics. The logic extends S4. The basic idea is to work with two-dimensional worlds (w, r) that correspond to the purely modal and purely resource components of the semantics. The key development derives from their combination to define resource-modalities ♦_r and □_r in which 'modal truth' is offset by 'resource truth'. These modalities generalize their counterparts in S4 (♦ and □). In Section 2, we introduce the language and the semantics of LSM, using a quite general relational formulation. In Section 3, we illustrate the expressiveness of its modalities through a range of core examples from computer systems. Then, in Section 4, we develop an extended example, showing that LSM provides useful tools for reasoning about a rich model of concurrent computation: in particular, we show that LSM is able to express directly and conveniently properties of timed Petri nets [24,1]. In Section 5, we place LSM in the broader context of modal logic by establishing, using a straightforward method based on countermodels, that LSM is a conservative extension of the classical modal logic S4. Then, in Section 6, we provide a proof system for LSM as a labelled tableaux calculus with countermodel extraction, in the spirit of similar approaches for BI and Boolean BI [16,17,18]. We show its soundness and completeness. Finally, in Section 7, we summarize our contribution and discuss a range of directions for further work, including both purely logical aspects and applications to program analysis and verification in the spirit of the work of Ishtiaq, O'Hearn, and Reynolds on Separation Logic [22,35].

A logic with separating modalities, LSM
We establish a development of BI's resource semantics [30,33,16,17,6,4,5] that is capable of defining a quite general notion of modality.
Let Prop be a countable set of propositional symbols and Σ_R be a countable set of resource symbols. The language L_Σ_R of LSM is defined as follows, where p ∈ Prop and r ∈ Σ_R:

φ ::= p | ⊤ | ⊥ | I | ¬φ | φ ∧ φ | φ ∨ φ | φ → φ | φ ∗ φ | φ −∗ φ | ♦_r φ | □_r φ

Let us note that I (resp. ⊤, ⊥) is the unit of ∗ (resp. ∧, ∨).

Definition 1 (Partial resource monoid). A partial resource monoid (PRM) is a structure M = (Res, •, e) such that Res is a set of resources, e ∈ Res, and • : Res × Res ⇀ Res is a partial function that is associative and commutative and admits e as a neutral element, whenever the compositions involved are defined; we write r • s ↓ when r • s is defined and r • s ↑ otherwise. We call • the resource composition and e the unit resource.

Definition 2 (Model). A model is a structure K = (W, M, R, V), where W is a set of worlds, M = (Res, •, e) is a PRM, R ⊆ (W × Res) × (W × Res) is a reflexive and transitive relation, and V : Prop → ℘(W × Res). R is called a reachability relation and V is called a valuation.

Note that the interpretation ⟦−⟧ : Σ_R ⇀ Res is a partial function such that ⟦e⟧ = e. Henceforth, we abuse notation and write r for ⟦r⟧, neglecting further mention of ⟦−⟧ (this is the default approach in process logics, such as Hennessy-Milner logic [21,25,37]). Moreover, Definitions 1 and 2 ensure the necessary coherence between modal accessibility and resources.

Definition 3 (Satisfaction relation, validity). Let K = (W, M, R, V) be a model. The satisfaction relation ⊨_K is defined by structural induction, for all w ∈ W and all r ∈ Res, as follows:

w, r ⊨_K p iff (w, r) ∈ V(p)
w, r ⊨_K ⊤ always
w, r ⊨_K ⊥ never
w, r ⊨_K I iff r = e
w, r ⊨_K ¬φ iff w, r ⊭_K φ
w, r ⊨_K φ ∧ ψ iff w, r ⊨_K φ and w, r ⊨_K ψ
w, r ⊨_K φ ∨ ψ iff w, r ⊨_K φ or w, r ⊨_K ψ
w, r ⊨_K φ → ψ iff w, r ⊨_K φ implies w, r ⊨_K ψ
w, r ⊨_K φ ∗ ψ iff there exist r_1, r_2 ∈ Res such that r_1 • r_2 ↓, r_1 • r_2 = r, w, r_1 ⊨_K φ, and w, r_2 ⊨_K ψ
w, r ⊨_K φ −∗ ψ iff for all r′ ∈ Res, if r • r′ ↓ and w, r′ ⊨_K φ, then w, r • r′ ⊨_K ψ
w, r ⊨_K ♦_s φ iff r • s ↓ and there exist w′ ∈ W and r′ ∈ Res such that (w, r • s)R(w′, r′) and w′, r′ ⊨_K φ
w, r ⊨_K □_s φ iff r • s ↓ and, for all w′ ∈ W and all r′ ∈ Res, if (w, r • s)R(w′, r′), then w′, r′ ⊨_K φ

We say that a formula φ is valid, denoted ⊨ φ, if and only if, for all worlds w and all resources r in all models K, w, r ⊨_K φ. We write φ ⊨ ψ if and only if, for all worlds w and all resources r in all models K, w, r ⊨_K φ implies w, r ⊨_K ψ.
We emphasize that, suppression of the distinction between s and ⟦s⟧ notwithstanding, it is not supposed that Σ_R ⊆ Res. The judgement w, r ⊨_K ♦_s φ is defined only if r • s ↓ and then only if ⟦s⟧ ∈ Res. In other words, we consider that the meaning of w, r ⊨_K ♦_s φ is: 's is a resource (such that ⟦s⟧ ∈ Res) which can be composed with r (r • s ↓), and if we compose these resources, then the system can reach a world w′ and a resource r′ ((w, r • s)R(w′, r′)) satisfying φ (w′, r′ ⊨_K φ)'.
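The reading of ♦_s just described can be rendered directly as a membership check over an explicit model. The following sketch is our own encoding (a capacity-bounded partial composition on token counts and a small hand-written reachability relation), used only to illustrate the clause.

```python
# A tiny LSM-style model as explicit data: a partial composition on
# token counts (undefined above a capacity), and a reachability
# relation R on (world, resource) pairs, assumed reflexive/transitive.

CAP = 2

def compose(r, s):
    """Partial resource composition r . s; None encodes 'undefined'."""
    return r + s if r + s <= CAP else None

R = {(("w0", 1), ("w0", 1)),
     (("w0", 1), ("w1", 0)),
     (("w1", 0), ("w1", 0))}

def sat_diamond(w, r, s, phi):
    """w, r |= <s> phi: r . s is defined and some R-successor of
    (w, r . s) satisfies phi (phi is a predicate on (world, resource))."""
    rs = compose(r, s)
    if rs is None:                 # r . s undefined: the judgement fails
        return False
    return any(phi(w2, r2)
               for ((w1, r1), (w2, r2)) in R if (w1, r1) == (w, rs))

# Spending one extra token (s = 1) lets (w0, 0) reach world w1:
assert sat_diamond("w0", 0, 1, lambda w, r: w == "w1")
# Composition beyond the capacity is undefined, so the judgement fails:
assert not sat_diamond("w0", 1, 2, lambda w, r: True)
```

The second assertion shows the definedness side-condition at work: when r • s ↑, the judgement fails outright, regardless of the formula.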
The language of LSM can be extended with the S4 modalities described in Section 1, together with their 'some-resource' variants ♦• and □•.

Definition 4 (Additional modalities). For a given Σ_R, the language L_Σ_R can be extended as follows:

φ ::= . . . | ♦φ | □φ | ♦•φ | □•φ

The satisfaction relation given in Definition 3 can be extended to define these additional modalities.

Definition 5 (Satisfaction for the additional modalities). The satisfaction relation of the additional modalities of Definition 4 is defined by the following extension of Definition 3:

w, r ⊨_K ♦φ iff there exist w′ ∈ W and r′ ∈ Res such that (w, r)R(w′, r′) and w′, r′ ⊨_K φ
w, r ⊨_K □φ iff for all w′ ∈ W and all r′ ∈ Res, if (w, r)R(w′, r′), then w′, r′ ⊨_K φ
w, r ⊨_K ♦•φ iff there exist s ∈ Res, w′ ∈ W, and r′ ∈ Res such that r • s ↓, (w, r • s)R(w′, r′), and w′, r′ ⊨_K φ
w, r ⊨_K □•φ iff for all s ∈ Res, all w′ ∈ W, and all r′ ∈ Res, if r • s ↓ and (w, r • s)R(w′, r′), then w′, r′ ⊨_K φ

The pairs of modalities ♦ and □ (S4) and ♦• and □• can both be derived from the modalities ♦_s and □_s. For any φ, ψ in some given L_Σ_R, we write φ ≡ ψ if and only if φ ⊨ ψ and ψ ⊨ φ.

Lemma 6. The following equivalences hold: ♦φ ≡ ♦_e φ, □φ ≡ □_e φ, ♦•φ ≡ ¬(⊤ −∗ ¬♦_e φ), and □•φ ≡ ⊤ −∗ □_e φ.

Proof. Straightforward applications of the relevant cases of the satisfaction relation.

The expressiveness of LSM
We consider, in this section, some examples that are intended to illustrate the uses and expressiveness of LSM. First, we illustrate the interest and use of LSM's modalities, in the context of systems and security, by considering the mutual exclusion and producer-consumer problems, revisiting examples considered in [6,4,5,9]. Then, we consider the relative expressiveness of the three modalities and show that, for example, they allow us to eliminate ambiguities occurring in the expression 'to be able to'.

Mutual exclusion
We consider two processes (P 1 and P 2 ) that are in mutual exclusion.The automaton that describes the behaviour of the processes is given in Figure 1.
The processes have two states: nc, meaning that the process is in the non-critical section; and c, meaning that it is in the critical section. We denote by S = {nc, c} the state set of the processes.
In order to enter the critical section, a process must hold a token, denoted J, and it releases the token when it leaves the critical section. The processes can perform four actions: a_nc, a non-critical action; a_c, a critical action; a_p, the action of taking a token; and a_v, the action of releasing a token. We denote by A = {a_nc, a_c, a_p, a_v} the set of actions that can be performed by the processes.
We represent the resources (the tokens J) with M = ({J_n | n ∈ N}, +, J_0), where J_m + J_n = J_{m+n}. In other words, J_n represents n tokens that are available to the system (the processes P_1 and P_2). We remark that M is obviously a PRM. Now, we need a function that captures resource consumption and production when an action is performed. Following the approach taken in [6,4,5], based on an idea first considered for MBI in [32], we define a partial function µ by

µ(a_nc, J_n) = J_n    µ(a_c, J_n) = J_n    µ(a_v, J_n) = J_{n+1}
µ(a_p, J_n) = J_{n−1} if n ≥ 1, and µ(a_p, J_n) ↑ otherwise,

where ↑ means 'undefined' and ↓ means 'defined'. We remark that performing a critical or a non-critical action (a_c and a_nc) consumes and produces no token, releasing a token (a_v) produces a token (J_{n+1}), and taking a token (a_p) consumes a token (J_{n−1}). Of course, µ(a_p, J_n) is defined if and only if there is at least one available token (n ≥ 1). We introduce a relation that captures the transitions of a process and their effects on the available resources: s, J_n −a→ s′, J_m iff s −a→ s′ is a transition of Figure 1, µ(a, J_n) ↓, and µ(a, J_n) = J_m. For instance, we have nc, J_1 −a_p→ c, J_0, but nc, J_1 −a_v→ c, J_0 does not hold (because there is no transition nc −a_v→ c in the automaton of Figure 1). This relation is very close in spirit to the judgements introduced in the SCRP calculus [6,4,5], which are of the form R, E −a→ R′, E′, meaning that a process E performs an action a on a resource R and then provides the resource R′ and the process E′. In order to deal with concurrent transitions, we need to define a set of concurrent states W = {s_1#s_2 | s_1, s_2 ∈ S} (where s_i is the state of the process P_i), a set of concurrent actions A# = {a_1#a_2 | a_1, a_2 ∈ A} (where a_i is the action performed by the process P_i), and the following relation: s_1#s_2, J_{n_1} + J_{n_2} =a_1#a_2⇒ s_1′#s_2′, J_{m_1} + J_{m_2} iff s_1, J_{n_1} −a_1→ s_1′, J_{m_1} and s_2, J_{n_2} −a_2→ s_2′, J_{m_2}. For example, the concurrent state nc#c captures P_1 in state nc and P_2 in state c. Moreover, the concurrent action a_c#a_p represents P_1
performing the action a_c and P_2 performing the action a_p. Concerning the relation =⇒, as nc, J_1 −a_p→ c, J_0 and nc, J_0 −a_nc→ nc, J_0 hold, we have nc#nc, J_1 + J_0 =a_p#a_nc⇒ c#nc, J_0 + J_0; thus nc#nc, J_1 =a_p#a_nc⇒ c#nc, J_0. We are able to model the behaviour of the processes P_1 and P_2 and the token manipulation using the following LSM model K = (W, M, R, V), where W and M are defined as above, R is the reflexive and transitive closure of =⇒, and V is the valuation described below. We illustrate R: as c#nc, J_0 =a_v#a_nc⇒ nc#nc, J_1 and nc#nc, J_1 =a_nc#a_p⇒ nc#c, J_0 hold, we have (c#nc, J_0)R(nc#nc, J_1) and (nc#nc, J_1)R(nc#c, J_0); by transitive closure, (c#nc, J_0)R(nc#c, J_0). Concerning the valuation V, J is the proposition meaning that there is one and only one available token, c_i is the proposition meaning that the process P_i is in the critical section, and nc_i is the proposition meaning that the process P_i is not in the critical section.
We consider that the initial state of the system is nc#nc (each process is in its non-critical section) and there is only one available token (J_1). We can obviously express that, in this initial state, each process is in its non-critical section and there is only one available token as follows: nc#nc, J_1 ⊨_K nc_1 ∧ nc_2 ∧ J.
The first important point is that LSM is a modal logic, so it is possible to express properties of reachable states and available tokens. For example, we can express that it is impossible for the processes to be together in the critical section, nc#nc, J_1 ⊨_K ¬♦(c_1 ∧ c_2), and also that it is always possible for each process to enter the critical section: nc#nc, J_1 ⊨_K □(♦c_1 ∧ ♦c_2). The second important point is that LSM is a modal logic extended with the resource composition (denoted •), which allows us to express properties of the tokens that are produced and consumed. In particular, we can express that, in any reachable state, there cannot be more than one available token: nc#nc, J_1 ⊨_K ¬♦(J ∗ J ∗ ⊤). It is also possible to express that if one process is in the critical section, then there is no available token: nc#nc, J_1 ⊨_K □((c_1 ∨ c_2) → I). Indeed, only the unit resource satisfies I and, in our example, this unit resource is J_0, which encodes no available token.
Notice that the formula ¬♦(c_1 ∧ c_2), with the S4-like modality, fails to capture a vulnerability in the system. This security breach is highlighted by the new modalities: nc#nc, J_1 ⊭_K ¬♦•(c_1 ∧ c_2). Indeed, if we assume that an intruder introduces one token into our system, then both processes can enter the critical section, because of the presence of a second token: nc#nc, J_1 + J_1 =a_p#a_p⇒ c#c, J_0. It follows that we can identify a new solution for the mutual exclusion problem as one such that nc#nc, J_1 ⊨_K ¬♦•(c_1 ∧ c_2); that is, such that the processes cannot both enter the critical section, whatever number of tokens is added.
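The reachability claims in this example are finite-state and can be checked by exhaustive search. The following sketch uses our own encoding of the two processes and the shared token pool; the per-process transition table is read off Figure 1, and the concurrent-step rule mirrors the split of the token pool in the relation =⇒.

```python
# Each process step is (source state, target state, token delta):
# a_nc and a_c leave tokens unchanged, a_p consumes one, a_v releases one.
PROC = [("nc", "nc", 0), ("nc", "c", -1), ("c", "c", 0), ("c", "nc", +1)]

def steps(state):
    """Concurrent steps: both processes act; consumed tokens <= pool,
    matching the split J_n = J_n1 + J_n2 in the relation =>."""
    (s1, s2), n = state
    for (a1, t1, d1) in PROC:
        for (a2, t2, d2) in PROC:
            if (a1, a2) == (s1, s2) and (d1 < 0) + (d2 < 0) <= n:
                yield ((t1, t2), n + d1 + d2)

def reachable(init):
    """All (concurrent state, token count) pairs reachable from init."""
    seen, todo = {init}, [init]
    while todo:
        for nxt in steps(todo.pop()):
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# With one token, c#c is unreachable (mutual exclusion holds) ...
assert all(s != ("c", "c") for (s, _) in reachable((("nc", "nc"), 1)))
# ... but with an intruder's extra token, both enter the critical section:
assert any(s == ("c", "c") for (s, _) in reachable((("nc", "nc"), 2)))
```

The two assertions are exactly the ♦ versus ♦• distinction: the first witnesses ¬♦(c_1 ∧ c_2) at (nc#nc, J_1), while the second shows why ¬♦•(c_1 ∧ c_2) fails there.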

Producer-consumer
We propose here another example, based on the producer-consumer problem but with a different approach: one in which the set of worlds W encodes the actions that the processes are performing, rather than their current states. In this example, we consider two processes, a producer P_p and a consumer P_c, that manipulate resources represented by M = ({R_n | n ∈ N}, +, R_0), just as in the previous example.
The producer can perform just two actions: p (it is producing a new resource) and np (it is not producing). The consumer can also perform only two actions, which are c (it is consuming a resource) and nc (it is not consuming). Thus W = {p#c, np#c, p#nc, np#nc} is the set of all concurrent actions that can be performed by the processes.
For instance, p#nc means that P_p is producing (p) and P_c is not consuming (nc). Clearly, only the following transitions hold, for all w ∈ W:

p#nc, R_n =⇒ w, R_{n+1}    np#nc, R_n =⇒ w, R_n
p#c, R_n =⇒ w, R_n         np#c, R_n =⇒ w, R_{n−1} (n ≥ 1)

We remark that np#c, R_n =⇒ w, R_{n−1} holds only if n ≥ 1. Indeed, if there is no resource (R_0) and if P_p does not produce a new resource (np), then P_c cannot consume a resource (c).
Concerning the relation R, we consider the reflexive and transitive closure of =⇒. As in the previous example, we are able to propose a model for this system, K = (W, M, R, V). In this model, by definition of R and reflexivity, (np#c, R_0)R(w, R_n) only if w = np#c and n = 0, and we can express that if there is no resource (R_0), if P_p is not producing a new resource, and if P_c is consuming a resource, then the system is blocked (it never changes its state). In LSM, we can express this property as follows, for any w ∈ W and any n ∈ N: w, R_n ⊨_K □((I ∧ np ∧ c) → □(I ∧ np ∧ c)). It means that, for all reachable states (world/resource pairs) and starting from any state, if there is no resource (I), if P_p is not producing a new resource (np), and if P_c is consuming a resource (c), then the system always remains in this state (□(I ∧ np ∧ c)). Now, using the multiplicative modalities, we can also express that it is possible to unblock the system by adding a resource.
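Both the blocked-state property and the unblocking behaviour can be checked by a bounded search over the transitions above. This is a sketch with our own encodings (`DELTA` records the resource effect of each concurrent action, and `reach` unfolds the reflexive-transitive closure a fixed number of times):

```python
WORLDS = ["p#c", "np#c", "p#nc", "np#nc"]
DELTA = {"p#c": 0, "p#nc": +1, "np#nc": 0, "np#c": -1}

def step(w, n):
    """Successors of (w, R_n) under =>: any concurrent action may follow,
    provided the resource count does not go negative."""
    if n + DELTA[w] >= 0:
        for w2 in WORLDS:
            yield (w2, n + DELTA[w])

def reach(state, depth=3):
    """Bounded unfolding of the reflexive-transitive closure of =>."""
    seen = {state}
    for _ in range(depth):
        seen |= {t for s in seen for t in step(*s)}
    return seen

# (np#c, R_0) is blocked: it reaches only itself ...
assert reach(("np#c", 0)) == {("np#c", 0)}
# ... but adding one resource unblocks it:
assert ("np#nc", 1) in reach(("np#c", 1))
```

The first assertion is the semantic content of □((I ∧ np ∧ c) → □(I ∧ np ∧ c)); the second shows the effect of composing in one extra resource before asking about reachability.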

Expressiveness of the modalities
We consider here the relative expressiveness of the three kinds of modalities, and observe that these modalities eliminate ambiguities concerning the assumptions required to support the expression 'to be able to'.
In this example, we consider three agents, A_1, A_2, and A_3, and one action, act. We suppose that A_1 and A_2 are able to perform the action act, but A_3 is not. We consider the set of resources Res = {R_n | n ∈ N}, where R_n means n occurrences of R, and the resource composition + defined by R_m + R_n = R_{m+n}. In this example, the agents want to achieve the goal G, which consists in performing the action act. In order to perform this action, the agent A_1 needs no resource and A_2 needs two resources (we recall that A_3 cannot perform the action act). Then, we propose three LSM models, one for each agent: for i ∈ {1, 2, 3}, K_i = ({a_i, G_i}, M, R_i, V_i), where a_i is the agent A_i in its initial state, G_i is A_i having achieved the goal G, M = (Res, +, R_0), and R_i is the reflexive and transitive closure of the appropriate transition relation for A_i. Now, we consider the agents being in their initial states and trying to achieve the goal without resources. Then the question is: 'which agent is able to achieve G?'. As we observe that a_1, R_0 ⊨_K1 ♦P_G, a_2, R_0 ⊭_K2 ♦P_G, and a_3, R_0 ⊨_K3 ¬♦P_G, we can see that only A_1 is able to achieve the goal; the other agents are not. We remark also, however, that the question is ambiguous. Indeed, A_2 is also able to achieve G, because it is able to perform the action act, but it needs more resources to do it.
Then, the question of which agent is able to achieve G, whatever the resources provided to the agent, can be viewed as a second meaning of the question. LSM allows us to express this second meaning: a_1, R_0 ⊨_K1 ♦•P_G, a_2, R_0 ⊨_K2 ♦•P_G, and a_3, R_0 ⊨_K3 ¬♦•P_G. We observe that A_3 is not able to achieve G, whatever the quantity of resources provided. Finally, we can be more precise, expressing that A_1 needs no more resources to achieve G but A_2 needs two more: a_1, R_0 ⊨_K1 ♦_R0 P_G and a_2, R_0 ⊨_K2 ♦_R2 P_G. In this example, we give three models, one for each agent. An alternative, which might be developed in future work, would be to internalize agents in the syntax of LSM (as a modality parameter) in the spirit of epistemic logics [38,10]. Moreover, we will study the relationships of our logic with propositional dynamic logics [20].

LSM and timed Petri nets
We complete our set of examples of the uses of LSM's modalities by showing that LSM can conveniently express properties of rich models of concurrent and distributed computation, namely timed Petri nets (TPN) [24,1]. This example builds on the spirit of Winskel's work on Petri net semantics for intuitionistic linear logic [13,14] and O'Hearn and Yang's Petri net semantics of BI [29].
Timed Petri nets are a model of computation that can describe distributed systems, concurrency, and the production and consumption of resources. In these models, resources are represented by places, and the consumption and production of resources is captured by transitions. We describe amounts of resources using multisets, a multiset over a finite set P being a function M : P → N. We say that M is a finite multiset iff Σ_{p∈P} M(p) ∈ N. We denote by M_P the set of finite multisets over P.

Definition 7 (Petri net).
A Petri net is a 4-tuple P = (P, T, pre, post) such that P is a finite set of places, T is a finite set of transitions, and pre and post are two functions T → M P .
The markings are denoted [p_1, . . ., p_n], where the p_i are places. For instance, the marking M = [p_1, p_2, p_2] is the function such that M(p_1) = 1, M(p_2) = 2, and M(p_i) = 0 for all p_i ∈ P \ {p_1, p_2}. In this example, we can say that there are two tokens in the place p_2 and one token in p_1. [] is the empty marking; that is, [](p) = 0 for all p ∈ P.
The marking addition M + N is defined by (M + N)(p) = M(p) + N(p), for all p ∈ P. When a transition t_i is fired, resources are consumed, given by pre(t_i), and also produced, given by post(t_i). We write M −t_i→ N when the marking (the resources) M, after firing the transition t_i, becomes the marking N. Thus, we have N = (M − pre(t_i)) + post(t_i). We say that the transition t_i is enabled for the marking M iff pre(t_i) ≤ M. Sometimes, when a marking M is implicitly under consideration, we say that t_i is enabled, rather than t_i is enabled for M. Considering a marking M, we denote by T/M the set of all transitions that are enabled for M.
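The firing rule N = (M − pre(t_i)) + post(t_i) transcribes directly to multiset arithmetic; `collections.Counter` supports exactly the pointwise addition and (truncated) subtraction needed. The pre/post sets for the transition `t2` below are assumptions chosen to match the [p_4, p_4] −t_2→ [p_2, p_3, p_4] step in the Figure 2 discussion, not data given in the text.

```python
from collections import Counter

def enabled(pre_t, M):
    """pre(t) <= M, pointwise on places."""
    return all(M[p] >= k for p, k in pre_t.items())

def fire(pre_t, post_t, M):
    """N = (M - pre(t)) + post(t); only defined when t is enabled."""
    assert enabled(pre_t, M)
    return M - pre_t + post_t

# An illustrative transition t2 consuming one p4 and producing p2 and p3.
pre_t2 = Counter({"p4": 1})
post_t2 = Counter({"p2": 1, "p3": 1})

M = Counter({"p4": 2})                             # the marking [p4, p4]
N = fire(pre_t2, post_t2, M)
assert N == Counter({"p2": 1, "p3": 1, "p4": 1})   # [p2, p3, p4]
assert not enabled(Counter({"p1": 1}), N)          # no token in p1 yet
```

Note that `Counter` subtraction drops non-positive counts, which is harmless here because firing is guarded by the enabledness check.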

Definition 8 (Timed Petri net).
A timed Petri net is a 6-tuple T = (P, T, pre, post, α, β) such that (P, T, pre, post) is a Petri net, α : T → R^+, and β : T → R^+ ∪ {∞}. Timed Petri nets, denoted TPN, are Petri nets in which each transition t_i has an associated time interval [α(t_i), β(t_i)]. These intervals capture the delay and duration of transition firing. For instance, if the interval [2, 5] is associated with the transition t_i, then, if t_i becomes enabled at time θ and stays continuously enabled, t_i may be fired after time θ + 2 and must be fired before time θ + 5. Thus, in order to capture time elapsing, implicit clocks ν : T → R^+ are considered. For example, if the current time is θ and ν(t_i) = 2, then t_i became enabled at time θ − 2 and has remained continuously enabled until now. Moreover, if a transition t_i is not enabled, then ν(t_i) = 0, and the value of the implicit clock of t_i remains equal to 0 until t_i becomes enabled.
In other words, ν(t_i) can be viewed as a chronometer that starts when the transition t_i becomes enabled and is reset to 0 when the transition becomes disabled or is fired. We define ν′ = ν + d to be the function such that ν′(t_i) = ν(t_i) + d for all t_i ∈ T/M, and ν′(t_i) = 0 otherwise. Therefore, in TPN, there is a transition relation dealing with time elapsing and another one dealing with transition firing. We remark that time is not allowed to elapse in such a way that an implicit clock of an enabled transition t_i becomes greater than β(t_i). We also note that, when time elapses, only the implicit clocks of enabled transitions are increased, by definition of ν + d. Concerning a transition firing (ν, M) −t_i→ (ν′, N), we remark that, after firing a transition t_i, the implicit clocks are updated as follows:
- the implicit clock of a transition t_j is reset to 0 if t_j is not enabled for the new marking N, that is, t_j ∉ T/N;
- the implicit clock of a transition t_j is reset to 0 if t_j was the fired transition, that is, t_i = t_j;
- the implicit clock of a transition t_j is reset to 0 if t_j did not stay continuously enabled, in particular during the step of token consumption (t_j ∉ T/(M − pre(t_i)));
- otherwise, the implicit clock of a transition t_j does not change its value.
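The clock-update rules in the list above can be transcribed as two small functions. This is a sketch with our own encodings: `enabled_mid` stands for T/(M − pre(t_i)) and `enabled_after` for T/N, both supplied by the caller rather than computed from a net.

```python
def elapse(nu, enabled_now, d):
    """Time elapsing: only the clocks of enabled transitions advance."""
    return {t: v + d if t in enabled_now else 0.0 for t, v in nu.items()}

def fire_clocks(nu, t_fired, enabled_mid, enabled_after):
    """Clock update on firing t_fired, following the four cases above."""
    new = {}
    for t, v in nu.items():
        stays = (t in enabled_after) and (t in enabled_mid) and t != t_fired
        new[t] = v if stays else 0.0   # reset unless continuously enabled
    return new

nu = {"t1": 0.0, "t2": 1.5, "t3": 1.5}
nu = elapse(nu, enabled_now={"t2", "t3"}, d=2.0)    # t1 is not enabled
assert nu == {"t1": 0.0, "t2": 3.5, "t3": 3.5}

# Firing t2: its clock resets, t3 stays continuously enabled, and t1
# only becomes enabled now, so its clock is (still) 0.
nu = fire_clocks(nu, "t2", enabled_mid={"t3"},
                 enabled_after={"t1", "t2", "t3"})
assert nu == {"t1": 0.0, "t2": 0.0, "t3": 3.5}
```

The worked values reproduce the (⟨0, 1.5, 1.5⟩, [p_4, p_4]) step of the Figure 2 example discussed below.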
The reachability relation ⇝ is formally defined as follows: (ν, M) ⇝ (ν′, N) iff there exists a sequence (ν, M) −a_1→ · · · −a_n→ (ν′, N), each a_i being a delay or a transition. We remark that this relation is obviously transitive and, considering n = 0, reflexive: (ν, M) ⇝ (ν, M). Considering the TPN of Figure 2, we see that there are four places (P = {p_1, p_2, p_3, p_4}) and three transitions (T = {t_1, t_2, t_3}). Moreover, a time interval is associated with each transition. We have α(t_3) = 1 and β(t_3) = 4, meaning that if t_3 becomes enabled at time θ and remains continuously enabled, then this transition may fire after time θ + 1 and must fire before time θ + 4. We can also observe that α(t_2) = 2 and β(t_2) = ∞, meaning that the transition t_2 may only fire after time θ + 2 (there is no other constraint concerning its firing time).
In this example, we consider that the initial marking is [p_4, p_4]. All implicit clocks are initialized to 0, giving ν(t_1) = ν(t_2) = ν(t_3) = 0. We use the notation ⟨0, 0, 0⟩ to represent the values of all implicit clocks. As 0 < 2 and 0 < 1, it is not possible to fire the transition t_2 or t_3. But it is possible to let time elapse. We have, for example, (⟨0, 0, 0⟩, [p_4, p_4]) −1.5→ (⟨0, 1.5, 1.5⟩, [p_4, p_4]). We remark that the implicit clock of t_1 is not equal to 1.5, because this transition is not enabled for [p_4, p_4]. Now, as ν(t_3) ≥ α(t_3) (1.5 ≥ 1), it is possible to fire the transition t_3. But it is also possible to let time elapse again and then fire t_2: (⟨0, 1.5, 1.5⟩, [p_4, p_4]) −2→ (⟨0, 3.5, 3.5⟩, [p_4, p_4]) −t_2→ (⟨0, 0, 3.5⟩, [p_2, p_3, p_4]). We observe that t_1 becomes enabled, the implicit clock of t_2 is initialized to 0 (t_2 is fired), and the implicit clock of t_3 does not change (t_3 remains continuously enabled during this transition). Now, we suppose that the current state is (⟨0, 0, 3.5⟩, [p_2, p_3, p_4]). Finally, we show that LSM is able to express properties of TPN. For any set E, we denote by card(E) the cardinality of E, that is, the number of elements of E.

Lemma 9. Let T = (P, T, pre, post, α, β) be a TPN and let K = (W, M, R, V) be the structure in which W is the set of implicit clocks of T, M = (M_P, +, []), R is the reachability relation ⇝ of T, and V(p) = {(ν, [p]) | ν ∈ W} for every place p ∈ P (places being read as propositional symbols). Then K is a model.
Proof. It is sufficient to verify that K satisfies Definition 2.
We consider the following function ⌈−⌉ : M_P → L, which reads a marking as a formula: ⌈[]⌉ = I and ⌈M + [p]⌉ = p ∗ ⌈M⌉.

Proposition 10. Let T = (P, T, pre, post, α, β) be a TPN and let K = (W, M, R, V) be the associated model of Lemma 9. For any implicit clock ν ∈ W and any marking M ∈ M_P, we have ν, M ⊨_K ⌈M⌉.
Proof. The proof is by induction on the number n of tokens in M.
- Base case (n = 0). By Lemma 9, K is a model. Then ν, [] ⊨_K I, and ⌈[]⌉ = I.
- Inductive case. We suppose that the Proposition holds for all markings that contain n tokens (induction hypothesis), and then prove it for all markings that contain n + 1 tokens. Such a marking can be written M + [p]. By the induction hypothesis, ν, M ⊨_K ⌈M⌉, and, by definition of V, ν, [p] ⊨_K p. Splitting M + [p] into [p] and M, we obtain ν, M + [p] ⊨_K p ∗ ⌈M⌉ = ⌈M + [p]⌉.

Returning to the TPN of Figure 2: as (⟨0, 0, 0⟩, [p_2]) ⇝ (⟨0, 0, 0⟩, [p_1]), we have ⟨0, 0, 0⟩, [] ⊨_K ♦•p_1. Here, ♦•φ expresses that the timed Petri net can reach a state that satisfies φ, but additional resources (tokens) may be needed to achieve it. This modality is also interesting when combined with negation. For example, ⟨0, 0, 0⟩, [p_4] ⊨_K ¬♦•p_1 expresses that it is not possible, whatever the resources/tokens that are added to the timed Petri net, to reach the marking [p_1]. Finally, the resource-indexed modality ♦_s φ allows us to express that, adding the marking s, the timed Petri net can reach a marking that satisfies φ. For instance, we have ⟨0, 0, 0⟩, [] ⊨_K ♦_[p_2] p_1. In conclusion, we have shown that LSM models directly capture reachability in timed Petri nets. This stems from the two-dimensional structure based on pairs (world, resource).

Conservativity of LSM over S4
In this section, we show that LSM is a conservative extension of the modal logic S4 (e.g., [3]). More specifically, we show that a formula φ of L_S4 is valid in S4 if and only if φ is valid in LSM. Then, with the equivalences of Lemma 6, we have that the resource-indexed modalities properly generalize the S4 modalities.

The logic S4
Let Prop be a countable set of propositional symbols. The language L_S4 of S4 is defined as follows, where p ∈ Prop:

φ ::= p | ⊤ | ⊥ | ¬φ | φ ∧ φ | φ ∨ φ | φ → φ | ♦φ | □φ

Definition 11 (S4-model). An S4-model is a triple K_S4 = (W_S4, R_S4, V_S4), where W_S4 is a set of worlds, V_S4 : Prop → ℘(W_S4), and R_S4 ⊆ W_S4 × W_S4 satisfies:
- reflexivity: w_1 R_S4 w_1; and
- transitivity: if w_1 R_S4 w_2 and w_2 R_S4 w_3, then w_1 R_S4 w_3.

Definition 12 (Satisfaction relation, validity). Let K_S4 = (W_S4, R_S4, V_S4) be an S4-model. The satisfaction relation ⊨_KS4 ⊆ W_S4 × L_S4 is inductively defined, for all w ∈ W_S4, as follows:

w ⊨_KS4 p iff w ∈ V_S4(p)
w ⊨_KS4 ⊤ always
w ⊨_KS4 ⊥ never
w ⊨_KS4 ¬φ iff w ⊭_KS4 φ
w ⊨_KS4 φ ∧ ψ iff w ⊨_KS4 φ and w ⊨_KS4 ψ
w ⊨_KS4 φ ∨ ψ iff w ⊨_KS4 φ or w ⊨_KS4 ψ
w ⊨_KS4 φ → ψ iff w ⊨_KS4 φ implies w ⊨_KS4 ψ
w ⊨_KS4 ♦φ iff there exists w′ ∈ W_S4 such that w R_S4 w′ and w′ ⊨_KS4 φ
w ⊨_KS4 □φ iff for all w′ ∈ W_S4, if w R_S4 w′, then w′ ⊨_KS4 φ

We say that a formula φ is valid in S4, denoted ⊨_S4 φ, if and only if, for all worlds w in all S4-models K_S4, w ⊨_KS4 φ. Now we can establish that LSM is a conservative extension of S4; that is, we show that, for any formula φ ∈ L_S4, we have ⊨_S4 φ if and only if ⊨ φ.

From LSM-countermodels to S4-countermodels
In this section, we show how to obtain an S4-countermodel from an LSM-countermodel. Given an LSM model K = (W, M, R, V), the transformation T_LSM→S4 yields the S4-model K_S4 = (W_S4, R_S4, V_S4) in which W_S4 = W × Res, R_S4 = R, and V_S4 = V; R_S4 is reflexive and transitive because R is, so K_S4 is indeed an S4-model. For any formula φ ∈ L_S4, any w ∈ W, and any r ∈ Res, we have w, r ⊨_K φ if and only if (w, r) ⊨_KS4 φ.
Proof. By induction on the structure of φ.
- Case w, r ⊨_K ⊤. We have (w, r) ⊨_KS4 ⊤, by definition of ⊨_KS4.
- Case (w, r) ⊨_KS4 ⊤. We have w, r ⊨_K ⊤, by definition of ⊨_K.
- Case w, r ⊨_K ⊥. This case is absurd, by definition of ⊨_K.
- Case (w, r) ⊨_KS4 ⊥. This case is absurd, by definition of ⊨_KS4.
- Inductive cases. We suppose that the proposition holds for formulae φ and ψ (IH).
- Case w, r ⊨_K ♦φ. By definition, there are w′ ∈ W and r′ ∈ Res such that (w, r)R(w′, r′) and w′, r′ ⊨_K φ. By definition of T_LSM→S4 and by the induction hypothesis, there is (w′, r′) ∈ W_S4 such that (w, r)R_S4(w′, r′) and (w′, r′) ⊨_KS4 φ. Then (w, r) ⊨_KS4 ♦φ.
- Case (w, r) ⊨_KS4 ♦φ. By definition, there is (w′, r′) ∈ W_S4 such that (w, r)R_S4(w′, r′) and (w′, r′) ⊨_KS4 φ. By the induction hypothesis and by construction, there are w′ ∈ W and r′ ∈ Res such that (w, r)R(w′, r′) and w′, r′ ⊨_K φ. Then w, r ⊨_K ♦φ.
- Case w, r ⊨_K □φ. Let (w′, r′) ∈ W_S4 be such that (w, r)R_S4(w′, r′). By definition of T_LSM→S4, we have (w, r)R(w′, r′). Then, as w, r ⊨_K □φ, we have w′, r′ ⊨_K φ. Then, by the induction hypothesis, (w′, r′) ⊨_KS4 φ, and we have (w, r) ⊨_KS4 □φ.
- Case (w, r) ⊨_KS4 □φ. Let w′ ∈ W and r′ ∈ Res be such that (w, r)R(w′, r′). By definition of T_LSM→S4, (w, r)R_S4(w′, r′). Then, as (w, r) ⊨_KS4 □φ, we have (w′, r′) ⊨_KS4 φ. Then, by the induction hypothesis, w′, r′ ⊨_K φ, and we have w, r ⊨_K □φ.
- The other cases are similar.

Lemma 16. Let φ ∈ L_S4 be a formula. If ⊨_S4 φ, then ⊨ φ.

Proof. We show the contrapositive: if ⊭ φ, then ⊭_S4 φ. If φ is not valid in LSM, there are a model K, a world w, and a resource r such that w, r ⊭_K φ. By the preceding property, (w, r) ⊭_{T_LSM→S4(K)} φ, and hence φ is not valid in S4.

From S4-countermodels to LSM-countermodels
We show how to obtain an LSM-countermodel from an S4-countermodel. Let K_S4 = (W_S4, R_S4, V_S4) be an S4-model, and let T_S4→LSM(K_S4) be the structure K = (W, M, R, V) in which W = W_S4, M = (Res, •, e) with Res = {e} and e • e = e, (w, e)R(w′, e) iff w R_S4 w′, and V(p) = {(w, e) | w ∈ V_S4(p)}. Then K is an LSM model.

Proof. M is a PRM, and R is reflexive and transitive because R_S4 is reflexive and transitive.

For any formula φ ∈ L_S4 and any w ∈ W_S4, we have w ⊨_KS4 φ if and only if w, e ⊨_K φ.

Proof. By induction on the structure of φ.
- Case w ⊨_KS4 p. By definition, w ∈ V_S4(p), and, by definition of T_S4→LSM, (w, e) ∈ V(p). Then w, e ⊨_K p.
- Case w, e ⊨_K p. By definition, (w, e) ∈ V(p), and, by definition of T_S4→LSM, w ∈ V_S4(p). Then w ⊨_KS4 p.
- Case w ⊨_KS4 ⊤. We have w, e ⊨_K ⊤, by definition of ⊨_K.
- Case w, e ⊨_K ⊤. We have w ⊨_KS4 ⊤, by definition of ⊨_KS4.
- Case w ⊨_KS4 ⊥. This case is absurd, by definition of ⊨_KS4.
- Case w, e ⊨_K ⊥. This case is absurd, by definition of ⊨_K.
- Inductive cases. We suppose that this proposition holds for formulae φ and ψ (this is the induction hypothesis).
- Case w ⊨_KS4 ¬φ. By definition, w ⊭_KS4 φ, and, by the induction hypothesis, w, e ⊭_K φ. Then w, e ⊨_K ¬φ.
- Case w, e ⊨_K ¬φ. By definition, w, e ⊭_K φ, and, by the induction hypothesis, w ⊭_KS4 φ. Then w ⊨_KS4 ¬φ.
- Case w ⊨_KS4 φ ∧ ψ. By definition, w ⊨_KS4 φ and w ⊨_KS4 ψ. By the induction hypothesis, w, e ⊨_K φ and w, e ⊨_K ψ. Then w, e ⊨_K φ ∧ ψ.
- Case w, e ⊨_K φ ∧ ψ. By definition, w, e ⊨_K φ and w, e ⊨_K ψ. By the induction hypothesis, w ⊨_KS4 φ and w ⊨_KS4 ψ. Then w ⊨_KS4 φ ∧ ψ.
- Case w ⊨_KS4 ♦φ. By definition, there is w′ ∈ W_S4 such that w R_S4 w′ and w′ ⊨_KS4 φ. By the induction hypothesis and by construction, there is w′ ∈ W such that (w, e)R(w′, e) and w′, e ⊨_K φ. Then w, e ⊨_K ♦φ.
- Case w, e ⊨_K ♦φ. By definition, there are w′ ∈ W and r′ ∈ Res such that (w, e)R(w′, r′) and w′, r′ ⊨_K φ. As Res = {e}, by definition of T_S4→LSM, we have r′ = e. Then (w, e)R(w′, e) and w′, e ⊨_K φ. By definition of T_S4→LSM and by the induction hypothesis, there is w′ ∈ W_S4 such that w R_S4 w′ and w′ ⊨_KS4 φ. Then w ⊨_KS4 ♦φ.
- Case w ⊨_KS4 □φ. Let w′ ∈ W and r′ ∈ Res be such that (w, e)R(w′, r′). As Res = {e}, by definition of T_S4→LSM, we have r′ = e. Then (w, e)R(w′, e). By definition of T_S4→LSM, w R_S4 w′. Thus, as w ⊨_KS4 □φ, we have w′ ⊨_KS4 φ. Then, by the induction hypothesis, w′, e ⊨_K φ, that is, w′, r′ ⊨_K φ. Then we have w, e ⊨_K □φ.
- Case w, e ⊨_K □φ. Let w′ ∈ W_S4 be such that w R_S4 w′. By definition of T_S4→LSM, (w, e)R(w′, e). Then, as w, e ⊨_K □φ, we have w′, e ⊨_K φ.
Then, by the induction hypothesis, w′ ⊨_KS4 φ, and we have w ⊨_KS4 □φ.
- The other cases are similar.
Lemma 20. Let φ ∈ L_{S4} be a formula. If ⊨_{LSM} φ then ⊨_{S4} φ.
Proof. We show the contrapositive: if ⊭_{S4} φ, then ⊭_{LSM} φ. We suppose that φ is not valid in S4; the preceding construction then yields an LSM-countermodel of φ.
Theorem 21. LSM is a conservative extension of S4.
Proof. Let φ ∈ L_{S4} be a formula. By Lemmas 16 and 20, φ is valid in LSM (⊨_{LSM} φ) if and only if φ is valid in S4 (⊨_{S4} φ).

A proof system for LSM
In this section, we develop a tableaux calculus for the logic LSM in the spirit of the tableaux calculi for BI and BBI [16,17,23], using notions introduced in those papers. Here, we introduce new rules to deal with the modalities and also new label constraints to capture the reachability relation R. One main difficulty is handling the interaction between the resource constraints, which encode the equality on resources, and the reachability constraints, which encode the relation R.

Labels for worlds and resources
We first define world and resource labels that are related, respectively, to the sets W and Res. Moreover, to capture the reachability relation R and the equality on resources, we introduce two kinds of label constraints. Such labels and constraints allow, in the case of the non-validity of a formula, a countermodel to be extracted.
Definition 22 (World labels). L_W is a countably infinite set of world labels. We let s and v, possibly subscripted, denote elements of L_W.
Definition 23 (Resource labels). L_R is the set of resource labels built from the set of resource symbols Σ_R \ {e}, a countably infinite set of constants γ_R = {c_1, c_2, ...}, a constant 1 ∉ Σ_R ∪ γ_R, and a function • : L_R × L_R → L_R that is associative, commutative, and has 1 as its unit.
We denote by xy the resource label x • y. For example, c_1c_2c_3c_3 is the resource label c_1 • c_2 • c_3 • c_3. Moreover, we say that x is a resource sub-label of y if and only if there exists z such that x • z = y. The set of resource sub-labels of x is denoted E(x).
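Because • is associative and commutative with unit 1, resource labels can be viewed as finite multisets of constants, and sub-labels as sub-multisets. The following Python sketch illustrates this reading; the class and method names are ours, not part of the calculus.

```python
from collections import Counter

class ResourceLabel:
    """A resource label: a finite multiset of resource constants.
    The empty multiset plays the role of the unit label 1."""
    def __init__(self, *constants):
        self.bag = Counter(constants)

    def __mul__(self, other):
        # Composition x • y: multiset union (associative, commutative).
        combined = ResourceLabel()
        combined.bag = self.bag + other.bag
        return combined

    def __eq__(self, other):
        return self.bag == other.bag

    def __hash__(self):
        return hash(frozenset(self.bag.items()))

    def is_sublabel_of(self, y):
        # x is a sub-label of y iff there is z with x • z = y,
        # i.e. x's multiset is contained in y's.
        return all(self.bag[c] <= y.bag[c] for c in self.bag)

UNIT = ResourceLabel()  # the label 1

x = ResourceLabel('c1', 'c2')
y = ResourceLabel('c3', 'c3')
assert (x * y).bag == Counter({'c3': 2, 'c1': 1, 'c2': 1})
assert x.is_sublabel_of(x * y)
assert UNIT.is_sublabel_of(x)      # 1 ∈ E(x) for every label x
assert x * UNIT == x               # 1 is the unit of •
```

The last two assertions correspond to the facts used later: 1 is a sub-label of every label, and 1 is neutral for •.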

Definition 24 (Constraints).
A resource constraint is an expression of the form x ∼ y, where x and y are resource labels. A reachability constraint is an expression of the form (u, x) ⇝ (v, y), where u and v are world labels and x and y are resource labels.

Rules for resource and reachability constraints
A constraint set C is a set that contains resource constraints and reachability constraints. Now we define the domain and the alphabet of such sets. Let C be a constraint set. The (resource) domain of C is the set D_r(C) of all resource sub-labels appearing in C, that is, D_r(C) = ⋃_{x∼y∈C} (E(x) ∪ E(y)) ∪ ⋃_{(u,x)⇝(v,y)∈C} (E(x) ∪ E(y)). The world/resource alphabet of C is the set of world/resource constants appearing in C; in particular, we have A_w(C) = ⋃_{(u,x)⇝(v,y)∈C} {u, v} and A_r(C) = (Σ_R ∪ γ_R) ∩ D_r(C). We notice that, for any non-empty set of constraints C, 1 ∈ D_r(C), because 1 ∈ E(x) for all resource labels x. Considering the rules of Figure 3, there are seven rules (1, s_r, d_r, t_r, c_r, k_r1, and k_r2) that produce resource constraints and four rules (r_a1, r_a2, t_a, and k_a) that produce reachability constraints.
As it is impossible to close separately a resource constraint set and a reachability constraint set, because of the rules k_r1, k_r2, r_a1, r_a2, and k_a, we choose to consider a single set C containing both resource and reachability constraints, rather than two separate sets.
We give an example of rule application. With C = {c_1 ∼ c_2, c_2 ∼ c_3, (s_1, c_1) ⇝ (s_2, c_4)}, we can show that (s_1, c_3) ⇝ (s_2, c_4) ∈ C̄: from c_1 ∼ c_2 and c_2 ∼ c_3 the rule t_r yields c_1 ∼ c_3, and this allows the resource label c_1 to be replaced by c_3 in the premise (s_1, c_1) ⇝ (s_2, c_4). It is important to note that the rules r_a1 and r_a2 (resp. k_r1 and k_r2) are used in Proposition 26 to prove that the rules 1_al and 1_ar (resp. q_l and q_r) can be derived. These rules are used to prove, respectively, the first and the second part of Corollary 27.
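The derivation just described can be mimicked computationally. The following Python sketch computes a fixpoint closure for three representative rules (symmetry s_r, transitivity t_r, and the replacement of ∼-equal resource labels inside reachability constraints, in the spirit of r_a1/r_a2); the full rule set of Figure 3 is larger, and the encoding of constraints as tuples is ours.

```python
def close(constraints):
    """Naive fixpoint closure of a constraint set.
    Resource constraints are ('~', x, y); reachability constraints
    are ('->', (u, x), (v, y)).  Only three closure rules are modelled:
    symmetry (s_r), transitivity (t_r), and replacement of ~-equal
    resource labels inside reachability constraints."""
    closed = set(constraints)
    changed = True
    while changed:
        changed = False
        new = set()
        res = [c for c in closed if c[0] == '~']
        for (_, x, y) in res:
            new.add(('~', y, x))                      # rule s_r
            for (_, y2, z) in res:
                if y2 == y:
                    new.add(('~', x, z))              # rule t_r
        for c in [c for c in closed if c[0] == '->']:
            _, (u, x), (v, y) = c
            for (_, a, b) in res:
                if a == x:
                    new.add(('->', (u, b), (v, y)))   # replace on the left
                if a == y:
                    new.add(('->', (u, x), (v, b)))   # replace on the right
        if not new <= closed:
            closed |= new
            changed = True
    return closed

C = {('~', 'c1', 'c2'), ('~', 'c2', 'c3'),
     ('->', ('s1', 'c1'), ('s2', 'c4'))}
assert ('->', ('s1', 'c3'), ('s2', 'c4')) in close(C)
```

Since no rule introduces new constants, the closure of a finite set is reached after finitely many iterations, which is consistent with the observation that A_w(C̄) = A_w(C) and A_r(C̄) = A_r(C).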
Proposition 26. The following rules can be derived from the rules for the closure of constraints.
Proof. We provide the corresponding deduction trees.
Proof. As C ⊆ C̄, we have A_w(C) ⊆ A_w(C̄) and A_r(C) ⊆ A_r(C̄). For the converse, we observe that the rules of Figure 3 do not introduce new world or resource constants.
Lemma 30 (Compactness). Let C be a (possibly countably infinite) set of constraints. (1) If x ∼ y ∈ C̄, then there is a finite set C_f ⊆ C such that x ∼ y ∈ C̄_f. (2) If (u, x) ⇝ (v, y) ∈ C̄, then there is a finite set C_f ⊆ C such that (u, x) ⇝ (v, y) ∈ C̄_f.
Proof. Let C be a set of constraints and c ∈ C̄ a constraint. If c ∈ C̄ because c ∈ C, then, considering C_f = {c}, we have C_f ⊆ C and c ∈ C̄_f. In the other cases, the constraint c is obtained by the rules of Figure 3. We prove the lemma by induction on the size n of the deduction tree of c.
- Base case (n = 0). Case of rule 1: the deduction tree is reduced to the conclusion of the rule.
- Inductive step. We suppose that properties (1) and (2) hold for deduction trees whose sizes are less than or equal to n (the induction hypothesis, IH). We prove the lemma for deduction trees whose sizes are equal to n + 1.
- Case s_r: the deduction tree ends with an application of the rule s_r to a deduction tree of x ∼ y.
In this case, c is the constraint y ∼ x. This deduction tree is finite, and the deduction tree of x ∼ y has size equal to n. Then, by the induction hypothesis, there is a finite set C_f ⊆ C such that x ∼ y ∈ C̄_f. Thus, by the rule s_r, y ∼ x ∈ C̄_f.
- Case c_r: the deduction tree ends with an application of the rule c_r; this case is detailed below.
- The other cases are similar.

A tableaux calculus for LSM
In this section, we define a labelled tableaux calculus for LSM in the spirit of previous works for BI and BBI [16,17,23].
Definition 31. The function ⌜·⌝ : Σ_R → L_R is defined by ⌜r⌝ = 1 if r = e, and ⌜r⌝ = r otherwise. A CSS ⟨F, C⟩ is finite if F and C are finite. The relation ⪯ is defined by ⟨F, C⟩ ⪯ ⟨F′, C′⟩ if and only if F ⊆ F′ and C ⊆ C′.
Proof. By induction on the number of labelled formulae that belong to F_f, using Lemma 30.
Figures 4 and 5 present the rules of the tableaux calculus for LSM, the latter including the rules for the modalities. Let us note that 's_i is a new label constant' means s_i ∈ L_W \ A_w(C) and that 'c_i and c_j are new label constants' means c_i ≠ c_j and c_i, c_j ∈ γ_R \ A_r(C). We denote by ⊕ the concatenation of lists.
with s_i, c_i, and c_j being new label constants, and with ⌜r⌝ = 1 if r = e, and ⌜r⌝ = r otherwise.
Definition 34 (Tableaux). Let ⟨F_0, C_0⟩ be a finite CSS. A tableau for this CSS is a list of CSS, called branches, built inductively according to the following rules: if an instance of a rule of Figures 4 and 5 whose condition cond⟨F, C⟩ is fulfilled is applied to a branch, then the list obtained by replacing that branch with the branches produced by the rule instance is again a tableau. We can show that the rules of Figures 4 and 5 preserve the property (P_css) of Definition 32 (using Corollary 27).
Observing the rules, we can distinguish two particular kinds of rules. First, there are the rules TI, T∗, F−∗, T♦_y, F□_y, T♦, F□, T♦_•, and F□_•. They introduce new constraints and also new label constants (s_i, c_i, and c_j), except for TI, which only introduces a new constraint. We illustrate the T♦ rule. When we apply this rule to a labelled formula T♦φ : (s_2, c_4) that belongs to a CSS ⟨F, C⟩, we have to choose a new world label and a new resource label that do not appear in C. For example, suppose that s_5 ∈ L_W \ A_w(C) and c_6 ∈ γ_R \ A_r(C). Choosing these labels, we can apply the rule, obtaining the new CSS ⟨F ∪ {Tφ : (s_5, c_6)}, C ∪ {(s_2, c_4) ⇝ (s_5, c_6)}⟩. We remark that the new reachability constraint (s_2, c_4) ⇝ (s_5, c_6) is added to the set of constraints.
Second, there are the rules F∗, T−∗, F♦_y, T□_y, F♦, T□, F♦_•, and T□_•. They have a condition on the closure of the label constraints: in order to apply one of these rules, we have to choose labels that satisfy the condition and then apply the rule using them; otherwise, we cannot apply the rule. We illustrate the T□ rule. Consider a CSS ⟨F, C⟩ such that T□φ : (s_1, c_1) ∈ F. To apply this rule, we have to choose a world label u and a resource label x such that (s_1, c_1) ⇝ (u, x) ∈ C̄. Suppose that (s_1, c_1) ⇝ (s_2, c_3) ∈ C̄. Then we can decide to apply the rule using s_2 and c_3, obtaining the CSS ⟨F ∪ {Tφ : (s_2, c_3)}, C⟩. Finally, we observe that the rules T♦_y, F♦_y, T□_y, and F□_y use the function ⌜·⌝ that converts the unit resource e into the unit resource label 1.
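The first kind of rule can be sketched operationally. The following Python fragment applies the T♦ step described above: it picks fresh labels from the branch's constraint alphabets and records the new reachability constraint. The representation of branches, formulae, and labels is ours.

```python
import itertools

def fresh(prefix, used):
    """Return the first label constant with this prefix not in `used`."""
    for i in itertools.count(1):
        cand = f'{prefix}{i}'
        if cand not in used:
            return cand

def apply_T_diamond(F, C, phi, u, x):
    """Apply the rule T♦ to T♦φ : (u, x) on the branch ⟨F, C⟩:
    choose a fresh world label s_i and a fresh resource constant c_i,
    add Tφ : (s_i, c_i) to F and (u, x) ⇝ (s_i, c_i) to C."""
    worlds = {w for ((a, _), (b, _)) in C for w in (a, b)}      # A_w(C)
    resources = {r for ((_, a), (_, b)) in C for r in (a, b)}   # A_r(C)
    s = fresh('s', worlds)
    c = fresh('c', resources)
    return (F | {('T', phi, (s, c))},
            C | {((u, x), (s, c))})

# One step on the branch holding T♦R : (s1, c1).
F = {('T', ('dia', 'R'), ('s1', 'c1'))}
C = {(('s1', 'c1'), ('s1', 'c1'))}
F2, C2 = apply_T_diamond(F, C, 'R', 's1', 'c1')
assert ('T', 'R', ('s2', 'c2')) in F2
assert (('s1', 'c1'), ('s2', 'c2')) in C2
```

The second kind of rule would instead search the closure of C for a constraint satisfying the side condition, adding no new labels.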

Definition 35 (Closure condition).
A CSS ⟨F, C⟩ is closed if one of the following conditions holds: (1) Tφ : (u, x) ∈ F, Fφ : (u, y) ∈ F, and x ∼ y ∈ C̄; (2) FI : (u, x) ∈ F and x ∼ 1 ∈ C̄; (3) F⊤ : (u, x) ∈ F; (4) T⊥ : (u, x) ∈ F. A CSS is open iff it is not closed. A tableau is closed iff all its branches are closed. A proof for a formula φ is a closed tableau for φ.
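Assuming the four closure conditions reconstructed above, checking whether a branch is closed can be sketched as follows. The encoding of labelled formulae as tuples and the names 'TOP', 'BOT', and 'I' are ours.

```python
def is_closed(F, C_closure):
    """Check the closure conditions on a branch ⟨F, C⟩.
    F holds triples (sign, formula, (world, resource));
    C_closure holds resource constraints ('~', x, y) from the closure C̄."""
    def equiv(x, y):
        return x == y or ('~', x, y) in C_closure
    for (sign, phi, (u, x)) in F:
        if sign == 'T' and phi == 'BOT':                  # T⊥ : (u, x)
            return True
        if sign == 'F' and phi == 'TOP':                  # F⊤ : (u, x)
            return True
        if sign == 'F' and phi == 'I' and equiv(x, '1'):  # FI with x ~ 1
            return True
        for (sign2, phi2, (u2, y)) in F:                  # Tφ and Fφ at the
            if sign == 'T' and sign2 == 'F' and phi == phi2 \
               and u == u2 and equiv(x, y):               # same world, x ~ y
                return True
    return False

F = {('T', 'P', ('s1', 'c1')), ('F', 'P', ('s1', 'c2'))}
assert is_closed(F, {('~', 'c1', 'c2')})   # closed via condition (1)
assert not is_closed(F, set())             # open without c1 ~ c2
```

Note that the first condition is checked modulo the closure C̄, which is why the resource constraints and reachability constraints must be closed together.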

Soundness
The soundness proof uses techniques similar to the ones used in the labelled tableaux method for BI [16,17]. The key point is the notion of realizability of a CSS ⟨F, C⟩, which means that there exist a model K and embeddings from world labels to the world set (⌊·⌋_w) and from resource labels to the resource set (⌊·⌋_r) of K such that, if Tφ : (u, x) ∈ F, then ⌊u⌋_w, ⌊x⌋_r ⊨_K φ and, if Fφ : (u, x) ∈ F, then ⌊u⌋_w, ⌊x⌋_r ⊭_K φ. To obtain such embeddings, we consider two functions ⌊·⌋_w : A_w(C) → W and ⌊·⌋_r : A_r(C) → Res.
We remark, by Proposition 29, that ⌊·⌋_w is defined on A_w(C). Such ⌊·⌋_r functions are then implicitly extended to D_r(C) by ⌊x • y⌋_r = ⌊x⌋_r • ⌊y⌋_r and ⌊1⌋_r = e. Moreover, ⌊x⌋_r can be undefined, because resource composition is partial.
A CSS is realizable if there exists a realization of it, and a tableau is realizable if at least one of its branches is realizable.
Proposition 37. Let ⟨F, C⟩ be a CSS and R = (K, ⌊·⌋_w, ⌊·⌋_r) a realization of it. The following properties hold: (1) if x ∼ y ∈ C̄, then ⌊x⌋_r = ⌊y⌋_r; (2) if (u, x) ⇝ (v, y) ∈ C̄, then (⌊u⌋_w, ⌊x⌋_r)R(⌊v⌋_w, ⌊y⌋_r).
Proof. This proof is a direct extension of the proof of the same proposition developed in previous works [9,23].
Lemma 38. The rules of the tableaux method for LSM preserve realizability.
Proof. Let T be a realizable tableau. By definition, T contains a realizable branch B = ⟨F, C⟩. Let R = (K, ⌊·⌋_w, ⌊·⌋_r) be a realization of the branch B, where K = (W, M, R, V), ⌊·⌋_w : A_w(C) → W, and ⌊·⌋_r : D_r(C) → Res. If we apply a rule to a labelled formula of a branch other than B, then B is not modified and T stays realizable. Otherwise, we proceed by cases on the formula to which the rule is applied. We only present the cases related to the modalities, the others having already been checked in previous works on BBI.
- Case T♦_yφ : (u, x). We have ⌊u⌋_w, ⌊x⌋_r ⊨_K ♦_yφ. Then there are w ∈ W and r ∈ Res such that ⌊x⌋_r • y↓, (⌊u⌋_w, ⌊x⌋_r • y)R(w, r), and w, r ⊨_K φ. As s_i and c_i are new label constants, ⌊s_i⌋_w and ⌊c_i⌋_r are not defined. Then we can extend R so that ⌊s_i⌋_w = w and ⌊c_i⌋_r = r. We also remark that the rule introduces the resource label ⌜y⌝. There are three cases:
- if y = e, then ⌜y⌝ = 1 and we have ⌊⌜y⌝⌋_r = ⌊1⌋_r = e = y;
- if y ≠ e and ⌜y⌝ ∈ A_r(C), then ⌜y⌝ = y and we have ⌊⌜y⌝⌋_r = ⌊y⌋_r = y;
- if y ≠ e and ⌜y⌝ ∉ A_r(C), then we can extend the realization by setting ⌊⌜y⌝⌋_r = y.
- Case F♦_yφ : (u, x). By realization, we have ⌊u⌋_w, ⌊x⌋_r ⊭_K ♦_yφ. Then, by definition, for all w ∈ W and r ∈ Res such that ⌊x⌋_r • y↓ and (⌊u⌋_w, ⌊x⌋_r • y)R(w, r), we have w, r ⊭_K φ. By the rule condition, (u, x • ⌜y⌝) ⇝ (v, z) ∈ C̄. Thus, by Proposition 37, (⌊u⌋_w, ⌊x • ⌜y⌝⌋_r)R(⌊v⌋_w, ⌊z⌋_r). There are two cases:
- if y = e, then ⌜y⌝ = 1 and we have ⌊⌜y⌝⌋_r = ⌊1⌋_r = e = y;
- if y ≠ e, then ⌜y⌝ = y and we have ⌊⌜y⌝⌋_r = ⌊y⌋_r = y.
Thus, we have ⌊⌜y⌝⌋_r = y. Remarking that ⌊x • ⌜y⌝⌋_r↓ and ⌊x • ⌜y⌝⌋_r = ⌊x⌋_r • ⌊⌜y⌝⌋_r, we have ⌊v⌋_w, ⌊z⌋_r ⊭_K φ, and we can conclude that R is a realization of the new branch ⟨F ∪ {Fφ : (v, z)}, C⟩.
- Case T♦φ : (u, x). We have ⌊u⌋_w, ⌊x⌋_r ⊨_K ♦φ. Then there are w ∈ W and r ∈ Res such that (⌊u⌋_w, ⌊x⌋_r)R(w, r) and w, r ⊨_K φ. As s_i and c_i are new label constants, ⌊s_i⌋_w and ⌊c_i⌋_r are not defined. Then we can extend R so that ⌊s_i⌋_w = w and ⌊c_i⌋_r = r. Then we obtain a realization of ⟨F, C ∪ {(u, x) ⇝ (s_i, c_i)}⟩, which is a realization of the new branch ⟨F ∪ {Tφ : (s_i, c_i)}, C ∪ {(u, x) ⇝ (s_i, c_i)}⟩.
- Case F♦φ : (u, x). By realization, we have ⌊u⌋_w, ⌊x⌋_r ⊭_K ♦φ. Then, by definition, for all w ∈ W and r ∈ Res such that (⌊u⌋_w, ⌊x⌋_r)R(w, r), we have w, r ⊭_K φ. By the rule condition, (u, x) ⇝ (v, y) ∈ C̄. Thus, by Proposition 37, (⌊u⌋_w, ⌊x⌋_r)R(⌊v⌋_w, ⌊y⌋_r). Therefore ⌊v⌋_w, ⌊y⌋_r ⊭_K φ, and we can conclude that R is a realization of the new branch ⟨F ∪ {Fφ : (v, y)}, C⟩.
- Case T♦_•φ : (u, x). We have ⌊u⌋_w, ⌊x⌋_r ⊨_K ♦_•φ. Then there are w ∈ W and s, r ∈ Res such that ⌊x⌋_r • s↓, (⌊u⌋_w, ⌊x⌋_r • s)R(w, r), and w, r ⊨_K φ. As s_i, c_i, and c_j are new label constants, ⌊s_i⌋_w, ⌊c_i⌋_r, and ⌊c_j⌋_r are not defined. Moreover, as c_i ≠ c_j, we can extend R so that ⌊s_i⌋_w = w, ⌊c_i⌋_r = s, and ⌊c_j⌋_r = r. Remarking that ⌊x⌋_r • ⌊c_i⌋_r↓ and, by the implicit extension, ⌊x • c_i⌋_r = ⌊x⌋_r • ⌊c_i⌋_r, we obtain a realization of the new branch.
- The other cases are similar.
Lemma 39. Closed branches are not realizable.
Proof. Let ⟨F, C⟩ be a closed branch. We suppose that this branch is realizable. Let R = (K, ⌊·⌋_w, ⌊·⌋_r) be a realization of it. There are four cases, one for each closure condition.
- By definition of realization and Proposition 37, we have ⌊u⌋_w, ⌊x⌋_r ⊨_K φ, ⌊u⌋_w, ⌊y⌋_r ⊭_K φ, and ⌊x⌋_r = ⌊y⌋_r. This case is absurd.
- By definition of realization and Proposition 37, ⌊u⌋_w, ⌊x⌋_r ⊭_K I and ⌊x⌋_r = e. This case is absurd.
- By definition of realization, ⌊u⌋_w, ⌊x⌋_r ⊭_K ⊤, which is absurd.
- By definition of realization, ⌊u⌋_w, ⌊x⌋_r ⊨_K ⊥, which is absurd.
As all cases are absurd, we conclude that ⟨F, C⟩ is not realizable.
Theorem 40 (Soundness). If there exists a proof for a formula φ, then φ is valid.
Proof. Suppose that there exists a proof for φ. Then there is a closed tableau T_φ for the CSS C_0 = ⟨{Fφ : (s_1, c_1)}, {(s_1, c_1) ⇝ (s_1, c_1)}⟩. Now suppose that φ is not valid. Then there are a countermodel K = (W, M, R, V), a world w ∈ W, and a resource r ∈ Res such that w, r ⊭_K φ. Let R = (K, ⌊·⌋_w, ⌊·⌋_r) be such that ⌊s_1⌋_w = w, ⌊c_1⌋_r = r, and ⌊1⌋_r = e. Note that R is a realization of C_0. By Lemma 38, T_φ is realizable. By Lemma 39, T_φ cannot be closed. But this is absurd, because T_φ is a proof and hence a closed tableau. Therefore φ is valid.

Tableaux examples
We first build a tableau for a formula φ (the formula of Figure 6). The initial tableau [⟨{Fφ : (s_1, c_1)}, {(s_1, c_1) ⇝ (s_1, c_1)}⟩] is a tableau for φ. In order to represent tableaux, we use the following representation: the column on the left represents the sets of labelled formulae of the CSS of the tableau ([F]) and the column on the right represents the constraint sets of the CSS of the tableau ([C]). Applying rules on this tableau, we obtain the tableau of Figure 6 for φ. We decorate a labelled formula with √_i to show that we apply a rule to this formula at step i.
We give more details about the rule applications at steps 3, 7, and 8. At step 3, we apply a rule to the labelled formula T♦R : (s_1, c_1). To apply the rule T♦, we have to choose a new world label (s_2) and a new resource label (c_2). Then the rule introduces into the branch the labelled formula TR : (s_2, c_2) and the constraint (s_1, c_1) ⇝ (s_2, c_2). Concerning step 7, we apply the rule T□_• to the labelled formula T□_•Q : (s_1, c_3). Then we have to choose v, y, and z such that the rule condition is satisfied. It is possible to apply this rule choosing v = s_2, y = c_4, and z = c_2, adding to the branch the labelled formula TQ : (s_2, c_2).
For step 8, we apply the rule F□_y to the labelled formula F□_eR : (s_2, c_2). This rule introduces the labelled formula FR : (s_2, c_2) and the corresponding reachability constraint.
Finally, we observe that the tableau's branches are closed (denoted ×), so this tableau is a proof for the formula φ; therefore, by Theorem 40, the formula is valid. We consider another example of a tableau, for the formula ♦_r(P ∗ Q) → (♦P ∗ ♦Q). By applying the tableaux rules, we obtain the tableau of Figure 7, with branches that are not closed.

Countermodel extraction
In this section, we present a countermodel extraction method that will be used to show the completeness of the tableaux calculus with respect to the model-theoretic semantics defined in Section 2. The method consists in transforming the reachability constraint set and the resource constraint set of a branch ⟨F, C⟩ into a model K such that, if Tφ : (u, x) ∈ F, then u, [x] ⊨_K φ and, if Fφ : (u, x) ∈ F, then u, [x] ⊭_K φ, where [x] is the equivalence class of x.
The first step is to saturate the labelled formulae of the branch (also known as obtaining a Hintikka CSS).
Definition 41 (Hintikka CSS). A CSS ⟨F, C⟩ is a Hintikka CSS iff, for any formulae φ, ψ ∈ L, any world label u ∈ L_W, any resource labels x, y ∈ L_R, and any resource symbol r ∈ Σ_R, the conditions listed below hold. This definition extends the similar definition given in previous works [9,23] with the conditions (18) to (29), which correspond to the treatment of the modalities. Let us note that the conditions (1), (2), (3), and (4) certify that a Hintikka CSS is not closed, and that the other conditions certify that all labelled formulae of a Hintikka CSS are saturated.
In order to extract a countermodel from a Hintikka CSS, we must build equivalence classes. The equivalence class of x ∈ D_r(C), denoted [x], is the set [x] = {y ∈ D_r(C) | x ∼ y ∈ C̄}, and D_r(C)/∼ is the set of all equivalence classes of D_r(C). We highlight that ∼ is an equivalence relation, because it is reflexive (by Corollary 27), symmetric (by rule s_r), and transitive (by rule t_r). Now we give the definition of a function Ω that extracts a countermodel from a Hintikka CSS.
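Since ∼ is an equivalence relation, the quotient D_r(C)/∼ can be computed with a standard union-find. A small Python sketch follows; the representation of labels as strings is ours.

```python
def equivalence_classes(domain, resource_constraints):
    """Partition `domain` into ∼-equivalence classes, where
    `resource_constraints` is a set of pairs (x, y) meaning x ∼ y."""
    parent = {x: x for x in domain}

    def find(x):
        # Find the representative of x's class, with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (x, y) in resource_constraints:
        parent[find(x)] = find(y)          # merge the two classes

    classes = {}
    for x in domain:
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

domain = {'1', 'c1', 'c2', 'c3'}
classes = equivalence_classes(domain, {('c1', 'c2')})
assert {'c1', 'c2'} in classes
assert {'c3'} in classes and {'1'} in classes
```

The resulting classes [x] then serve as the resource elements of the extracted model, as in Definition 42.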
Definition 42 (Function Ω). Let ⟨F, C⟩ be a Hintikka CSS. For all r ∈ Σ_R such that ⌜r⌝ ∈ D_r(C), we have r = [⌜r⌝]. Moreover, we consider that, for all r ∈ Σ_R such that ⌜r⌝ ∉ D_r(C), r is not defined (is not a resource). Note that our definition is well-formed for the case in which r = e ∈ Σ_R: indeed, ⌜e⌝ = 1 and 1 ∈ D_r(C).
- The other cases are similar.
Lemma 45. Let ⟨F, C⟩ be a Hintikka CSS such that Fφ : (u, x) ∈ F. Then the formula φ is not valid and Ω(⟨F, C⟩) is a countermodel of φ.
Proof. Let ⟨F, C⟩ be a Hintikka CSS such that Fφ : (u, x) ∈ F. Let K = Ω(⟨F, C⟩). By Lemma 43, K is a model. As ⟨F, C⟩ is a CSS, by (P_css) and Proposition 29, u ∈ A_w(C) and x ∈ D_r(C). Thus, by Lemma 44, we have u, [x] ⊭_K φ. Therefore K is a countermodel of the formula φ, and we can conclude that φ is not valid.
If we consider the tableau for the formula ♦_r(P ∗ Q) → (♦P ∗ ♦Q) in Figure 7, it contains a branch (denoted B) that is a Hintikka CSS. By Lemma 45, ♦_r(P ∗ Q) → (♦P ∗ ♦Q) is not valid and Ω(B) is a countermodel of this formula.
We extract this countermodel using Definition 42. We have K = Ω(B) = (W, M, R, V), where M = (Res, •, e), such that (recall that we abuse notation and write r for ⌜r⌝) the composition • and the valuation are as given by Definition 42, and (w, r)R(w, r) for all w ∈ W and r ∈ Res.
Proof. The proof is an adaptation, for our modalities, of the similar proof proposed in [9,23], which is itself an adaptation of the completeness proof of tableaux for first-order logic [15]. The proof developed in [9] gives all the necessary notions to derive this proof in detail.
In order to show the completeness of our tableau calculus, we consider a formula ϕ for which there exists no proof and we show that there exists a countermodel for this formula.
We denote by T_0 the initial tableau for ϕ. Then we have: 1. T_0 is a tableau for ⟨{Fϕ : (s_1, c_1)}, {(s_1, c_1) ⇝ (s_1, c_1)}⟩, and 2. T_0 cannot be closed. Now we present a way to obtain a Hintikka CSS that will allow us to conclude completeness. By Lemma 50, there exists an oracle that contains every finite CSS for which there exists no closed tableau. We denote this oracle by P.
By Proposition 47, there exists a fair strategy. We denote this strategy by S and by S_iχ_i : (u_i, x_i) the i-th formula of S. As T_0 cannot be closed, its unique branch belongs to the oracle, that is, ⟨{Fϕ : (s_1, c_1)}, {(s_1, c_1) ⇝ (s_1, c_1)}⟩ ∈ P. Now we build a sequence ⟨F_i, C_i⟩_{i≥0}, where F_e and C_e are determined by the applied rule instance.
Proposition 51. For any i ∈ N, the following properties hold:
1. Fϕ : (s_1, c_1) ∈ F_i and (s_1, c_1) ⇝ (s_1, c_1) ∈ C_i;

Conclusion
We have defined and studied an extension of the modal logic S4, called LSM, that introduces the notion of resource, and corresponding separating modalities, in its models.This logic directly and naturally supports reasoning about the manipulation of resources by a system.The resource semantics upon which LSM is based is that of BI [30,33,16,17], further informed by the treatment of modality considered in [6,5,8].
We have proposed a model-theoretic semantics for LSM and have given a labelled tableaux calculus that is proved sound and complete. Moreover, we have provided a countermodel extraction method for non-valid formulae.
We have considered a range of examples (essentially classic distributed-systems examples) that can be described naturally in LSM. Specifically, we have considered mutual exclusion, producer-consumer systems, and timed Petri nets. These examples illustrate the use and the natural expressiveness of the separating modalities which, although all definable in terms of the basic resource-shifted ♦_r and □_r, are convenient for the modelling examples illustrated.
There are many promising directions for future work that we intend to pursue. We summarize them here in order to explain some of the initial choices made in this paper. We begin with the core theoretical topics.
- Systematic exploration of the structure of multi-dimensional models. Here we have considered a two-dimensional set-up that employs a simple pairing of worlds. More generally, one might consider, with or without the resource interpretation, n-dimensional models in which worlds may be combined using the evident notion of bunching.
-Integration of the systems considered in this paper and in our proposed further work into a general co-algebraic perspective.
Considering applications, in particular in program analysis and verification, we can examine the relationship between our work and concurrent separation logic [27]. Concurrent separation logic is built upon the resource semantics of bunched logic and handles concurrent processes in the style of Hoare logic. We conjecture that our treatment of resource semantics can be used to support concurrent separation logic too.
We remark that, in general, there is a more-or-less straightforward relationship between Hoare-style presentations of program logics and logically more standard presentations based on a satisfaction relation between a model and a propositional formula. Hoare-style systems are based on assertions of the form { φ } C { ψ }, for logical formulae φ and ψ and program commands C, with essentially Hilbert-type proof systems, whereas more standard semantic presentations are formulated along the lines of w ⊨_M φ, where M is a model and w is a choice of world. In establishing the relationship between this view and Hoare-style presentations, we take a model with worlds given by program states (S, T, etc.) and consider how states evolve as programs perform actions by executing commands C; that is, S →^C T. Reynolds' Separation Logic [35], which employs a Hoare-style presentation, and Ishtiaq and O'Hearn's Pointer Logic [22], which employs a semantic presentation, enrich this view of reasoning about programs by introducing BI's concept of resource semantics in order to reason about mutable data structures.
In concurrent separation logic, the rule for the concurrent product of n ≥ 2 commands has the form: from { φ_i } C_i { ψ_i } for each i, infer { φ_1 ∗ · · · ∗ φ_n } C_1 ∥ · · · ∥ C_n { ψ_1 ∗ · · · ∗ ψ_n }, where no variable free in φ_i or ψ_i is changed in C_j when j ≠ i. In the resource-process calculi considered in [6,4,5], the multiplicative conjunction is also intimately connected to the concurrent product. Here we employ two-dimensional worlds in order to make assertions about the states of systems in which resources and processes co-evolve according to an operational semantics based on judgements of the form R, E →^a R′, E′, understood as asserting that the process E evolves by performing the action a relative to the available resources R so as to become the process E′ with available resources R′.
This example suggests that it would be interesting, and possibly of value for program analysis and verification, to consider classes of models in which some of the dimensions of the model are generated by the operational semantics of a programming language.Such models will have associated action modalities (cf.[6,4,5,8]).
- In the last case, ⟨F_{i+1}, C_{i+1}⟩ = ⟨F_i ∪ {S_iχ_i : (u_i, x_i)}, C_i⟩. By hypothesis, ⟨F_i ∪ {S_iχ_i : (u_i, x_i)}, C_i⟩ ∈ P, so Property 3 holds. Properties 4 and 5 hold by the induction hypothesis, because A_w(C_{n+1}) = A_w(C_n) and A_r(C_{n+1}) = A_r(C_n).
As F_f and C_f are finite and the sequence ⟨F_i, C_i⟩_{i≥0} is increasing by Property 2 of Proposition 51, there is j ∈ N such that ⟨F_f, C_f⟩ ⪯ ⟨F_j, C_j⟩. By Property 3 of Proposition 51, ⟨F_j, C_j⟩ ∈ P. As P is ⪯-closed, we have ⟨F_f, C_f⟩ ∈ P. Thus, for all ⟨F_f, C_f⟩ ⪯_f ⟨F_∞, C_∞⟩, we have ⟨F_f, C_f⟩ ∈ P. Therefore ⟨F_∞, C_∞⟩ ∈ P, because P is of finite character. 2. Let Sφ : (u, x) be such that ⟨F_∞ ∪ {Sφ : (u, x)}, C_∞⟩ ∈ P. By property (P_css), (u, 1) ⇝ (u, 1) ∈ C̄_∞ and x ∼ x ∈ C̄_∞. By compactness (Lemma 30), there are C_f1 ⊆ C_∞ and C_f2 ⊆ C_∞ such that C_f1 and C_f2 are finite, (u, 1) ⇝ (u, 1) ∈ C̄_f1, and x ∼ x ∈ C̄_f2. As the sequence is increasing, by Property 2 of Proposition 51, there are j_1, j_2 ∈ N such that C_f1 ⊆ C_{j1} and C_f2 ⊆ C_{j2}. Let j = max(j_1, j_2). As the sequence is increasing, we have C_f1 ⊆ C_j and C_f2 ⊆ C_j. As Sφ : (u, x) occurs infinitely many times in our fair strategy S, there is k ≥ j such that S_kχ_k : (u_k, x_k) = Sφ : (u, x). Moreover, C_j ⊆ C_k. Then (u, 1) ⇝ (u, 1) ∈ C̄_k and x ∼ x ∈ C̄_k. Thus ⟨F_k ∪ {Sφ : (u, x)}, C_k⟩ is a CSS (it satisfies the property (P_css)) and ⟨F_k ∪ {Sφ : (u, x)}, C_k⟩ ⪯ ⟨F_∞ ∪ {Sφ : (u, x)}, C_∞⟩, by definition of the limit CSS. As P is ⪯-closed, ⟨F_k ∪ {Sφ : (u, x)}, C_k⟩ ∈ P. By construction of ⟨F_{k+1}, C_{k+1}⟩, Sφ : (u, x) ∈ F_{k+1}. Therefore Sφ : (u, x) ∈ F_∞.

w, r ⊨_K φ ∗ ψ iff there exist r_1, r_2 ∈ Res such that r_1 • r_2↓, r = r_1 • r_2, w, r_1 ⊨_K φ, and w, r_2 ⊨_K ψ
w, r ⊨_K φ −∗ ψ iff for all r′ ∈ Res, if r • r′↓ and w, r′ ⊨_K φ, then w, r • r′ ⊨_K ψ
w, r ⊨_K ♦_sφ iff there exist w′ ∈ W and r′ ∈ Res such that r • s↓, (w, r • s)R(w′, r′), and w′, r′ ⊨_K φ
w, r ⊨_K □_sφ iff for all w′ ∈ W and all r′ ∈ Res, if r • s↓ and (w, r • s)R(w′, r′), then w′, r′ ⊨_K φ

Figure 1: Example of processes in mutual exclusion

Figure 3: Rules for constraints
Definition 25 (Closure of constraints). Let C be a set of constraints. The closure of C, denoted C̄, is the least set of constraints closed under the rules of Figure 3 such that C ⊆ C̄.

Proposition 29. Let C be a set of constraints. Then: (1) u ∈ A_w(C̄) if and only if (u, 1) ⇝ (u, 1) ∈ C̄; (2) x ∈ D_r(C̄) if and only if x ∼ x ∈ C̄.
Proof. 1. We suppose that u ∈ A_w(C̄). By definition, u ∈ ⋃_{(v,x)⇝(w,y)∈C̄} {v, w}. Then there exists (v, x) ⇝ (w, y) ∈ C̄ such that u = v or u = w. Thus, by Proposition 26, (u, 1) ⇝ (u, 1) ∈ C̄. Now, we suppose that (u, 1) ⇝ (u, 1) ∈ C̄. Then, by definition, u ∈ A_w(C̄). In conclusion, we have u ∈ A_w(C̄) if and only if (u, 1) ⇝ (u, 1) ∈ C̄. 2. We suppose that x ∈ D_r(C̄). By definition, we have x ∈ ⋃_{y∼z∈C̄} (E(y) ∪ E(z)) or x ∈ ⋃_{(u,y)⇝(v,z)∈C̄} (E(y) ∪ E(z)). There are two cases:
- there exists y ∼ z ∈ C̄ such that x ∈ E(y) ∪ E(z). Then there exists a resource label k such that xk ∼ z ∈ C̄ or y ∼ xk ∈ C̄. Thus, by Proposition 26, x ∼ x ∈ C̄;
- there exists (u, y) ⇝ (v, z) ∈ C̄ such that x ∈ E(y) ∪ E(z). Then there exists a resource label k such that (u, xk) ⇝ (v, z) ∈ C̄ or (u, y) ⇝ (v, xk) ∈ C̄. Then, by Proposition 26, x ∼ x ∈ C̄.
If we suppose that x ∼ x ∈ C̄, then, by definition, x ∈ D_r(C̄), and so we have x ∈ D_r(C̄) if and only if x ∼ x ∈ C̄. Using the rules s_r and t_r with Corollary 27, we can deduce that ∼ is an equivalence relation, and in particular that ∼ is reflexive. Moreover, the first part of Corollary 27 allows us to show that ⇝ is reflexive.
Corollary 28. Let C be a set of constraints. If xy ∈ D_r(C̄), x ∼ x′ ∈ C̄, and y ∼ y′ ∈ C̄, then xy ∼ x′y′ ∈ C̄.
Proof. By Corollary 27, xy ∼ xy ∈ C̄, and we conclude with the corresponding deduction tree. We also have, for any set of constraints C, A_w(C̄) = A_w(C) and A_r(C̄) = A_r(C).
In this case, c is the constraint xk ∼ yk. This deduction tree is finite, and the deduction trees of x ∼ y and yk ∼ yk have size less than or equal to n. Then, by the induction hypothesis, there are finite sets C_f1 ⊆ C and C_f2 ⊆ C such that x ∼ y ∈ C̄_f1 and yk ∼ yk ∈ C̄_f2. Let C_f = C_f1 ∪ C_f2. Then x ∼ y ∈ C̄_f and yk ∼ yk ∈ C̄_f. Thus, using the rule c_r, xk ∼ yk ∈ C̄_f. Moreover, C_f is finite, as the union of two finite sets, and C_f ⊆ C, as the union of two subsets of C.

Figure 5: Tableaux modal rules for LSM
- Case F♦_•φ : (u, x). By realization, we have ⌊u⌋_w, ⌊x⌋_r ⊭_K ♦_•φ. Then, by definition, for all w ∈ W and s, r ∈ Res such that ⌊x⌋_r • s↓ and (⌊u⌋_w, ⌊x⌋_r • s)R(w, r), we have w, r ⊭_K φ. By the rule condition, (u, xy) ⇝ (v, z) ∈ C̄. Thus, by Proposition 37, (⌊u⌋_w, ⌊xy⌋_r)R(⌊v⌋_w, ⌊z⌋_r). Remarking that ⌊xy⌋_r↓ and ⌊xy⌋_r = ⌊x⌋_r • ⌊y⌋_r (by the definition of realization), we have ⌊v⌋_w, ⌊z⌋_r ⊭_K φ, and then R is a realization of the new branch ⟨F ∪ {Fφ : (v, z)}, C⟩.

S →^C T. To see how this works, we need to consider how such commands generate logical modalities. Define S ⊨_M [C]φ iff, for every evolution S →^C T, T ⊨_M φ, which asserts that the program must have the property φ after executing the command C: whenever C evolves S to T, the state T has the property φ. Thus, a Hoare-style assertion { φ } C { ψ }, in which the command C evolves the program state from S to T, essentially corresponds to the semantic assertion S ⊨_M φ → [C]ψ.
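The correspondence between { φ } C { ψ } and S ⊨_M φ → [C]ψ can be sketched over an explicit transition relation. The following Python fragment is an illustration of the idea only; the representation of states, commands, and transitions is ours, not taken from any system in the paper.

```python
def box(command, transitions, prop, state):
    """S ⊨ [C]φ: φ holds in every state T reachable from `state`
    by one execution step of `command`.
    `transitions` maps (state, command) to the set of successor states."""
    return all(prop(t) for t in transitions.get((state, command), set()))

def hoare_triple(pre, command, post, transitions, states):
    """{ pre } command { post } holds iff every state satisfying the
    precondition satisfies pre -> [command]post, i.e. all successors
    of a pre-state satisfy the postcondition."""
    return all(box(command, transitions, post, s)
               for s in states if pre(s))

# Toy example: states are integers; the command 'inc' nondeterministically
# adds 1 or 2 to the state.
states = range(5)
transitions = {(s, 'inc'): {s + 1, s + 2} for s in states}
assert hoare_triple(lambda s: s <= 1, 'inc', lambda s: s <= 3,
                    transitions, states)
assert not hoare_triple(lambda s: s <= 2, 'inc', lambda s: s <= 3,
                        transitions, states)
```

The universal quantification over successors in `box` mirrors the box modality [C]; a resource-sensitive variant, in the spirit of the judgements R, E →^a R′, E′ above, would pair each state with its available resources.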