Modelling Mutual Exclusion in a Process Algebra with Time-outs

I show that in a standard process algebra extended with time-outs one can correctly model mutual exclusion in such a way that starvation-freedom holds without assuming fairness or justness, even when one makes the problem more challenging by assuming memory accesses to be atomic. This can be achieved only when dropping the requirement of speed independence.


Introduction
A mutual exclusion protocol mediates between competing processes to make sure that at any time at most one of them visits a so-called critical section in its code. Such a protocol is starvation-free when each process that intends to enter its critical section will eventually be allowed to do so.
As shown in [KW97,Vog02,GH15b], it is fundamentally impossible to correctly model a mutual exclusion protocol as a Petri net or in standard process algebras, such as CCS [Mil90], CSP [BHR84,Hoa85] or ACP [BW90,Fok00], unless starvation-freedom hinges on a fairness assumption. The latter, in the view of [GH15b], does not provide an adequate solution, as fairness assumptions are in many situations unwarranted and lead to false conclusions.
In [DGH17] a correct process-algebraic rendering of mutual exclusion is given, but only after making two important modifications to standard process algebra. The first involves making a justness assumption. Here justness [GH15a,GH19] is an alternative to fairness; in some sense it is a much weaker form of fairness, weaker even than weak fairness. Unlike (strong or weak) fairness, its use typically is warranted and does not lead to false conclusions. The second modification is the addition of a nonstandard construct, signals, to CCS, or any other standard process algebra. Interestingly, both modifications are necessary; merely assuming justness, or merely adding signals, is insufficient.
A similar process-algebraic rendering of mutual exclusion was given earlier in [CDV09], using a fairness assumption proposed in [CDV06] under the name fairness of actions. In [GH19] fairness of actions (there called fairness of events) was seen to coincide with justness.
Bouwman [Bou18,BLW20] points out that it is possible to correctly model mutual exclusion without adding signals to the language at all, instead reformulating the justness requirement in such a way that it effectively turns some actions into signals. Since the justness assumption was fairly new, and needed to be carefully defined to describe its interaction with signals anyway, redefining it to better capture read actions in mutual exclusion protocols is a plausible solution.
Yet justness is essential in all the above approaches. This may be seen as problematic, because large parts of the foundations of process algebra are incompatible with justness, and hence need to be thoroughly reformulated in a justness-friendly way. This is pointed out in [Gla19b]. In [GH15b], the inability to correctly capture mutual exclusion in CCS and related process algebras was seen as a sign that these process algebras lack some degree of universal expressiveness, rather than as a statement about the impossibility of mutual exclusion. The repairs in [CDV09,DGH17,Bou18,BLW20] seek to rectify this lack of expressiveness, either by considering language extensions, or by changing the definition of justness for the language. My presentation [Gla18] took a different perspective, and claimed that the impossibility results of [KW97,Vog02,GH15b] can be seen as saying something about the real world, rather than about formalisms we use to model it. The argument rests on two crucial features of mutual exclusion protocols that I call atomicity and speed independence. Instead of protocol features they can also be seen as assumptions on the hardware on which the mutual exclusion protocol will be running.
Atomicity is the assumption that memory accesses such as reads and writes take a positive amount of time, yet two such accesses to the same store or register cannot overlap in time, so that a second memory access can take place only after a first access is completed. Speed independence says that nothing may be assumed about the relative speed of processes competing for access to the critical section, or for read/write access to some register. In particular, if two processes are engaged in a race, and one of them has nothing else to do but performing the action that wins the race, whilst the other has a long list of tasks that must be done first, it may still happen that the other process wins.
When rejecting solutions to the mutual exclusion problem that are merely probabilistically correct, or where starvation-freedom hinges on a fairness assumption, [Gla18] claims, although without written evidence, that when assuming atomicity as well as speed independence, mutual exclusion is impossible. Section 22 of the present paper illustrates and substantiates this claim for Peterson's mutual exclusion protocol.
In [GH19] the notion of justness from [GH15a] was reformulated in terms of a concurrency relation ⌣• between the transitions in a labelled transition system. This relation may be inherited from a similar relation between the transitions of a Petri net or the instructions in the pseudocode of protocol descriptions. Here t ⌣̸• u (the negation of t ⌣• u) means that transition or instruction u uses a resource that is needed to perform transition or instruction t, so that if u occurs prior to, or instead of, t, it is not possible for t to commence before u is finished. The definitions of justness from [GH15a] and [GH19] were shown equivalent in [Gla19a].
The assumption of atomicity can be formulated directly in terms of the concurrency relation ⌣•. If ℓ and m are two transitions or instructions accessing the same register, the assumption of atomicity can be expressed as ℓ ⌣̸• m. It says that when m wins the race for access to this register, transition or instruction ℓ cannot take place until m is completed. The case that is relevant for the mutual exclusion problem is where ℓ writes and m reads. I see only two alternatives to ℓ ⌣̸• m. One is that the memory accesses ℓ and m overlap in time. This possibility has been investigated by Lamport [Lam74], who assumes that a read action that overlaps with a write on the same register can return any possible value of that register. Since the return of an unexpected value increases the set of possible behaviours of a mutual exclusion protocol, Lamport implicitly takes the position that assuming overlap of actions makes the mutual exclusion problem more challenging than assuming atomicity. Yet, he shows that his bakery algorithm [Lam74] constitutes an entirely correct solution. It moreover trivially is speed independent. However, according to [Gla18] atomicity is the more challenging assumption, as when assuming overlap a correct speed-independent solution exists, and when assuming atomicity it does not.
The second alternative to ℓ ⌣̸• m retains the assumption that memory accesses to the same register cannot overlap in time, but assumes write actions to have priority over reads. A write simply aborts a read that happens to be in progress, which can restart after the write is over. Following [CDV09], I refer to this assumption as non-blocking reading. When ℓ is a write action and m a read of the same register, it stipulates that ℓ ⌣• m, yet m ⌣̸• ℓ. This yields an asymmetric concurrency relation, which was not foreseen in classical treatments of concurrency [NPW81,GM84,BC87,Win87,GV87,DDM87,Old87].
The assumption of speed independence is built into CCS and Petri nets, in the sense that any correct mutual exclusion protocol formalised therein is automatically speed independent. This is because these models lack the expressiveness to make anything dependent on speed. In Section 4.4, following [Gla19b], I define a (symmetric) concurrency relation between Petri net transitions and between CCS transitions that is consistent with the work in [NPW81,GM84,BC87,Win87,GV87,DDM87,Old87]. It always yields ℓ ⌣̸• m when ℓ and m both access the same register. When taking this concurrency relation as an integral part of the semantics of CCS or Petri nets, it follows that also the assumption of atomicity is built into these frameworks. This makes the impossibility results of [KW97,Vog02,GH15b] special cases of the impossibility claim from [Gla18]. The latter can be seen as a generalisation of the former that is not dependent on a particular modelling framework.
The process algebras proposed in [CDV09] and [DGH17] model the possibility of non-blocking reading. This is what enables a correct rendering of speed-independent mutual exclusion without resorting to a fairness assumption. The correct modelling of mutual exclusion within CCS as proposed by Bouwman [Bou18] also exploits non-blocking reading. The justness assumption as formulated by Bouwman can in retrospect be seen as an instance of justness as defined in [GH19], but based on a concurrency relation ⌣• between CCS transitions that essentially differs from the one in Section 4.4, and that is not consistent with the work in [NPW81,GM84,BC87,Win87,GV87,DDM87,Old87], although it is entirely consistent with the interleaving semantics of CCS given by Milner [Mil90]. The claim in [GH15b,DGH17] that mutual exclusion cannot be rendered satisfactorily in CCS holds only when seeing the concurrency relation of Section 4.4 (or the resulting notion of justness) as an integral part of this language, and hence is not in contradiction with the findings of Bouwman [Bou18].
In [Gla21] I extended standard process algebra with a time-out operator, thereby increasing its absolute expressiveness, while remaining within the realm of untimed process algebra, in the sense that the progress of time is not quantified. The present paper shows that the addition of time-outs to standard process algebra makes it possible to correctly model mutual exclusion under the assumption of atomicity, such that starvation-freedom holds without assuming fairness. My witness for this claim will be a model of Peterson's mutual exclusion protocol. In view of the above, this model will not be speed independent.
Moreover, starvation-freedom can be shown to hold, not only without assuming fairness, but even without assuming justness. Instead, one should make the assumption called progress in [GH19], which is weaker than justness, uncontroversial, unproblematic, and made (explicitly or implicitly) in virtually all papers dealing with issues like mutual exclusion. In contrast, [Gla18] claims that even when dropping atomicity it is not possible to correctly model mutual exclusion in a speed-independent way without at least assuming justness to obtain starvation-freedom. Section 16/17 of the present paper illustrates and substantiates also that claim for Peterson's mutual exclusion protocol.
Reading Guide. Part IV of this paper shows how Peterson's mutual exclusion protocol can be modelled in an extension of CCS with time-outs. This process algebra assumes atomicity, as one has ℓ ⌣̸• m whenever m and ℓ are read and write transitions on the same register. The model satisfies all basic requirements for mutual exclusion protocols, and in addition achieves starvation-freedom without assuming more than progress.
Part III recalls all impossibility claims discussed above, and illustrates or substantiates them for Peterson's mutual exclusion protocol. To make the impossibility claims precise, I have to define unambiguously what does and what does not constitute a correct mutual exclusion protocol. This happens in Part II. That part also covers fair schedulers [GH15b], which are akin to mutual exclusion protocols, and were used in [GH15b] as a stepping stone to prove the impossibility result for mutual exclusion in CCS. I formalise four requirements on fair schedulers and six on mutual exclusion protocols that in combination determine their correctness. Some of these requirements, including starvation-freedom for mutual exclusion protocols, are parametrised with an assumption such as progress, justness or fairness, that needs to be made to fulfil this requirement. This leads to a hierarchy of quality criteria for fair schedulers and mutual exclusion protocols, where the quality of such a protocol is higher when it depends on a weaker assumption. I also propose two related mutual exclusion protocols, the gatekeeper and the encapsulated gatekeeper, that meet all correctness criteria when allowing (weak) fairness as parameter in some of the requirements. As I expect most researchers in the area of mutual exclusion to agree with me that the (encapsulated) gatekeeper is not an acceptable protocol, this underpins the verdict of [GH15b] that assuming (weak) fairness does not yield an acceptable solution.
The requirements on fair schedulers and mutual exclusion protocols in Part II are formulated in the language of linear-time temporal logic [Pnu77,HR04]. However, standard treatments of temporal logic turned out to be inadequate to formalise these requirements. For this reason, Part I presents a form of temporal logic that is adapted for the study of reactive systems, interacting with their environments through synchronisation of actions. (Reactive) temporal logic primarily applies to distributed systems formalised as states in a Kripke structure. However, it smoothly lifts to distributed systems formalised, for instance, as states in labelled transition systems, as Petri nets, or as expressions in a process algebra like CCS. As explained in Section 4, this is achieved through canonical translations from (states in) labelled transition systems to (states in) Kripke structures, and from Petri nets or process algebra expressions to states in labelled transition systems. Assumptions such as progress, justness and fairness are gathered under the heading completeness criteria, as in essence they say which execution paths are regarded as complete runs of a represented system. These criteria are incorporated in reactive temporal logic. To capture justness, Section 4.4 defines a concurrency relation ⌣• on the labelled transition systems that occur in the translation steps from CCS or Petri nets to Kripke structures.
As a reading guide, I offer a table of contents and a diagram of the structure of the paper. Several sections, among which Sections 9-12, are taken from [Gla20b]; the only added novelty is the treatment of the next-state operator X in reactive linear-time temporal logic, and the corresponding mild simplification of the requirements on fair schedulers and mutual exclusion protocols in Sections 11 and 12. Section 7, on translating reactive temporal logic into standard temporal logic, is also new here, as well as Section 8, characterising a fragment of linear-time temporal logic that denotes safety formulas. On that fragment there is no difference between reactive and standard temporal logic.

Part I. Reactive Temporal Logic

Whereas standard treatments of temporal logic are adequate for closed systems, having no run-time interactions with their environment, they fall short for reactive systems, interacting with their environments through synchronisation of actions. Here I present reactive temporal logic [Gla20b], a form of temporal logic adapted for the study of reactive systems.

Motivation
Labelled transition systems are a common model of distributed systems. They consist of sets of states, also called processes, and transitions, each transition going from a source state to a target state. A given distributed system D corresponds to a state P in a transition system T, the initial state of D. The other states of D are the processes in T that are reachable from P by following the transitions. The transitions are labelled by actions, either visible ones or the invisible action τ. Whereas a τ-labelled transition represents a state-change that can be made spontaneously by the represented system, a-labelled transitions, for a ≠ τ, merely represent potential activities of D, for they require cooperation from the environment in which D will be running, sometimes identified with the user of system D. A typical example is the acceptance of a coin by a vending machine. For this transition to occur, the vending machine should be in a state where it is enabled, i.e., the opening for inserting coins should not be closed off, but also the user of the system should partake by inserting the coin.

Consider a vending machine that alternatingly accepts a coin (c) and produces a pretzel (p). Its labelled transition system is depicted on the right. In standard temporal logic one can express that each action c is followed by p: whenever a coin is inserted, a pretzel will be produced. Aligned with intuition, this formula is valid for the depicted system. However, by symmetry one obtains the validity of a formula saying that each p is followed by a c: whenever a pretzel is produced, eventually a new coin will be inserted. But that clashes with intuition.
In [Gla20b] I enriched temporal logic judgements P |= ϕ, saying that system P satisfies formula ϕ, with a third argument B, telling which actions can be blocked by the environment (by failing to act as a synchronisation partner) and which cannot. When stipulating that the coin needs cooperation from a user, but producing the pretzel does not, the two temporal judgements can be distinguished, and only one of them holds. I also introduced a fourth argument CC, a completeness criterion, that incorporates progress, justness and fairness assumptions employed when making a temporal judgement. This yields statements of the form P |= CC B ϕ. (The technical development introduces ternary judgements P |= CC ϕ as a primitive, and obtains the quaternary judgements P |= CC B ϕ by employing a completeness criterion CC(B) that itself is parametrised by a set B of blocking actions.) The work in [Gla20b] builds on an earlier approach from [GH15a], where judgements P |= CC B ϕ were effectively introduced. However, there they were written P |= ϕ, based on the assumption that for a given application a completeness criterion CC and a set of blocking actions B would be fixed. The idea was that at the beginning of a paper employing temporal logic, a given CC and B would be declared, after which all forthcoming judgements P |= ϕ would be interpreted as P |= CC B ϕ. The novelty of the approach in [Gla20b] is to make CC and B as variable as P and ϕ, so that in the description of a single system, temporal judgements with different values of CC and B can be combined.
Suppose that P is the initial state of the example above, and G(a ⇒ Fb) is a formula that says that each action a is followed by a b. Abstracting from the completeness criterion for the moment, one has

P |= {c} G(c ⇒ Fp),    P ⊭ {c} G(p ⇒ Fc)    and    P |= ∅ G(p ⇒ Fc).

The first judgement says that whenever a coin is inserted, a pretzel will be produced, even if we operate in an environment that may never insert a coin. By taking B = {c} ∌ p, the judgement also assumes that the environment will never block the production of a pretzel.
The second judgement says that in the same environment there is no guarantee that each production of a pretzel is followed by the insertion of another coin.
The third judgement says that if we happen to run our vending machine in an environment where the user is perpetually eager to insert a new coin, after each pretzel, acceptance of the next coin is guaranteed. This is an important correctness property of the vending machine. Without such a property the machine is rather unsatisfactory. Hence a specification of the kind of vending machine one would like to have could be a combination of the first and third judgement above. This kind of specification was not foreseen in [GH15a].

Kripke Structures and Linear-time Temporal Logic
Definition 3.1. Let AP be a set of atomic predicates. A Kripke structure over AP is a tuple (S, →, |=) with S a set (of states), → ⊆ S × S the transition relation, and |= ⊆ S × AP. s |= p says that predicate p ∈ AP holds in state s ∈ S.
Here I generalise the standard definition (see for instance [HR04]) by dropping the condition of totality, which requires that for each state s ∈ S there is a transition (s, s′) ∈ →. A path in a Kripke structure is a nonempty finite or infinite sequence s0, s1, . . . of states, such that (si, si+1) ∈ → for each adjacent pair of states si, si+1 in that sequence. Write ρ ≤ π if path ρ is a prefix of path π. If ρ ≤ π and ρ is finite, then π − ρ denotes the suffix of π that remains after removing the prefix ρ, but not the last state of ρ, which becomes the first state of that suffix. The length of a path π, denoted |π| ∈ N ∪ {∞}, is the number of transitions in π; for instance |s0 s1 s2 s3| = 3.
A distributed system D can be modelled as a state s in a Kripke structure K. A run of D then corresponds with a path in K starting in s. Whereas each finite path in K starting from s models a partial run of D, i.e., an initial segment of a (complete) run, typically not each path models a run. Therefore a Kripke structure constitutes a good model of distributed systems only in combination with a completeness criterion [Gla19a]: a selection of a set of paths as complete paths, modelling runs of the represented system.
The default completeness criterion, implicitly used in almost all work on temporal logic, classifies a path as complete iff it is infinite. In other words, only the infinite paths, and all of them, model (complete) runs of the represented system. This applies when adopting the condition of totality, so that each finite path is a prefix of an infinite path. Naturally, in this setting there is no reason to use the word "complete", as "infinite" will do. As I plan to discuss alternative completeness criteria in Section 5, I here already refer to paths satisfying a completeness criterion as "complete" rather than "infinite". Moreover, when dropping totality, the default completeness criterion is adapted to declare a path complete iff it either is infinite or ends in a state without outgoing transitions [DV95].
In the standard treatment of LTL [Pnu77,HR04], judgements π |= ϕ are pronounced only for infinite paths π. Here I apply the same definitions verbatim to finite paths as well. Extra care is needed only in the definition of the next-state operator: π |= Xϕ iff |π| > 0 and π+1 |= ϕ, where π+1 is obtained from π by omitting the first state; the condition |π| > 0 is redundant when π is infinite. One can define a weak next-state operator Yϕ by
• π |= Yϕ iff |π| = 0 or π+1 |= ϕ.
Now Y is the dual of X, in the sense that Yϕ ≡ ¬X¬ϕ and Xϕ ≡ ¬Y¬ϕ, just like F is the dual of G. Here ϕ ≡ ψ means that (π |= ϕ) ⇔ (π |= ψ) for all paths π in all Kripke structures. The distinction between strong and weak next-state operators stems from [LPZ85]. When only infinite paths are considered, there is no difference between X and Y.
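To make the finite-path semantics concrete, here is a minimal Python sketch (mine, not part of the paper) that evaluates LTL formulas, given as nested tuples, on finite paths; it implements the strong next-state operator X and its weak dual Y exactly as above.

    def holds(path, label, phi):
        # path: non-empty list of states; label(s): predicates true in s
        op = phi[0]
        if op == 'ap':  return phi[1] in label(path[0])
        if op == 'not': return not holds(path, label, phi[1])
        if op == 'and': return holds(path, label, phi[1]) and holds(path, label, phi[2])
        if op == 'X':   # strong next: requires |path| > 0 transitions
            return len(path) > 1 and holds(path[1:], label, phi[1])
        if op == 'Y':   # weak next: trivially true at the end of the path
            return len(path) == 1 or holds(path[1:], label, phi[1])
        if op == 'F':   return any(holds(path[i:], label, phi[1]) for i in range(len(path)))
        if op == 'G':   return all(holds(path[i:], label, phi[1]) for i in range(len(path)))
        raise ValueError(op)

    # Y is the dual of X: Y phi is equivalent to not X not phi.
    label = lambda s: {s}
    pi, phi = ['a', 'b', 'b'], ('ap', 'b')
    assert holds(pi, label, ('Y', phi)) == (not holds(pi, label, ('X', ('not', phi))))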
Having given meaning to judgements π |= ϕ, one defines as a derived concept when an LTL formula ϕ holds for a state s in a Kripke structure, modelling a distributed system D, notation s |= ϕ or D |= ϕ.

Definition 3.2. s |= ϕ holds iff π |= ϕ for all complete paths π starting in s, that is, iff ϕ holds for all runs of D.
This definition depends on the underlying completeness criterion, telling which paths model actual system runs. In situations where I consider different completeness criteria, I make this explicit by writing s |= CC ϕ, with CC the name of the completeness criterion used. When leaving out the superscript CC I refer to the default completeness criterion, defined above.
Example 3.3. Alice, Bart and Cameron stand behind a bar, continuously ordering and drinking beer. Assume they do not know each other and order individually. As there is only one bartender, they are served sequentially. Also assume that none of them is served twice in a row, but as it takes no longer to drink a beer than to pour it, each of them is ready for the next beer as soon as another person is served.

A Kripke structure of this distributed system D is drawn on the right. The initial state of D is indicated by a short arrow. The other three states are labelled with the atomic predicates A, B and C, indicating that Alice, Bart or Cameron, respectively, has just acquired a beer. When assuming the default completeness criterion, valid LTL formulae are F(A ∨ C), saying that eventually either Alice or Cameron will get a beer, and G(A ⇒ F¬A), saying that each time Alice got a beer is eventually followed by someone else getting one. However, it is not guaranteed that Bart will ever get a beer: D ⊭ FB. A counterexample for this formula is the infinite run in which Alice and Cameron get a beer alternatingly.
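To illustrate (a sketch of mine, not the paper's): on an ultimately periodic path, FB holds iff B is encountered on the prefix or the cycle, so the alternating run refutes it mechanically.

    # The counterexample run of Example 3.3 as a lasso (prefix, cycle):
    # from the initial state i, Alice and Cameron are served alternatingly.
    def ev_F(prefix, cycle, p, label):
        # F p on the infinite path prefix . cycle . cycle . ...
        return any(p in label(s) for s in prefix + cycle)

    label = lambda s: {s} if s in 'ABC' else set()
    print(ev_F(['i'], ['A', 'C'], 'B', label))   # False: Bart never gets a beer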
Example 3.4. Bart is the only customer in a bar in London, with a single bartender. He only wants one beer. A Kripke structure of this system E is drawn on the right. When assuming the default completeness criterion, this time Bart gets his beer: E |= FB.
Example 3.5. Bart is the only customer in a bar in London, with a single bartender. He only wants one beer. At the same time, Alice and Cameron are in a bar in Tokyo. They drink a lot of beer. Bart is not in contact with Alice and Cameron, nor is there any connection between the two bars. Yet, one may choose to model the drinking in these two bars as a single distributed system. A Kripke structure of this system F is drawn on the right, collapsing the orders of Alice and Cameron, which can occur before or after Bart gets a beer, into self-loops. When assuming the default completeness criterion, Bart cannot count on a beer: F ⊭ FB.

Labelled Transition Systems, Process Algebra and Petri Nets
The most common formalisms in which to present reactive distributed systems are pseudocode, process algebra and Petri nets. The semantics of these formalisms is often given through translations into labelled transition systems (LTSs), and these in turn can be translated into Kripke structures, on which temporal formulae from languages such as LTL are interpreted. These translations make the validity relation |= for temporal formulae applicable to all these formalisms. A state in an LTS, for example, is defined to satisfy an LTL formula ϕ iff its translation into a state in a Kripke structure satisfies this formula. Figure 2 shows a commuting diagram of semantic translations found in the literature, from pseudocode, process algebra and Petri nets via LTSs to Kripke structures. Each step in the translation abstracts from certain features of the formalism at its source. Some useful requirements on distributed systems can be adequately formalised in process algebra or Petri nets, and informally described for pseudocode, whereas LTSs and Kripke structures have already abstracted from the relevant information. An example is requirement FS1 on fair schedulers in Section 11. I also consider LTSs upgraded with a concurrency relation ⌣• between transitions; these will be expressive enough to formalise some of these requirements.

Labelled Transition Systems.
Definition 4.1. Let A be a set of observable actions, and let Act := A ∪̇ {τ}, with τ ∉ A the hidden action. A labelled transition system (LTS) over Act is a tuple (P, Tr, src, trg, ℓ) with P a set (of states or processes), Tr a set (of transitions), src, trg : Tr → P and ℓ : Tr → Act.
Write s −α→ s′ if there is a t ∈ Tr with src(t) = s ∈ P, ℓ(t) = α ∈ Act and trg(t) = s′ ∈ P.
In this case t goes from s to s′, and is an outgoing transition of s. States s and s′ are the source and target of t. A path in an LTS is a finite or infinite alternating sequence of states and transitions, starting with a state, such that each transition goes from the state before it to the state after it (if any). A completeness criterion on an LTS is a set of its paths.
As for Kripke structures, a distributed system D can be modelled as a state s in an LTS upgraded with a completeness criterion. A (complete) run of D is then modelled by a complete path starting in s. As for Kripke structures, the default completeness criterion deems a path complete iff it either is infinite or ends in a deadlock, a state without outgoing transitions. An alternative completeness criterion could declare some infinite paths incomplete, saying that they do not model runs that can actually occur, and/or declare some finite paths that do not end in deadlock complete. A complete path π ending in a state models a run of the represented system that follows the path until its last state, and then stays in that state forever, without taking any of its outgoing transitions. A complete path that ends in a transition models a run in which the action represented by this last transition starts occurring but never finishes. It is often assumed that transitions are instantaneous, or at least of finite duration. This assumption is formalised through the adoption of a completeness criterion that holds all paths ending in a transition to be incomplete.
The most prominent translation from LTSs to Kripke structures stems from De Nicola & Vaandrager [DV95]. Its purpose is merely to efficiently lift the validity relation |= from Kripke structures to LTSs. It simply creates a new state halfway along any transition labelled by a visible action, and moves the transition label to that state.
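To make this translation concrete, here is a minimal sketch in Python (mine, not the paper's), anticipating Definition 4.2 below; the state and action names are illustrative.

    # Sketch of the LTS-to-Kripke-structure translation of Definition 4.2:
    # a visible transition gets a fresh halfway state carrying its action
    # as label; tau-transitions stay direct.
    def lts_to_kripke(transitions):
        arrows, labels = set(), {}
        for i, (s, a, s2) in enumerate(transitions):
            if a == 'tau':
                arrows.add((s, s2))
            else:
                mid = ('mid', i)              # fresh halfway state
                arrows |= {(s, mid), (mid, s2)}
                labels[mid] = {a}             # the label moves to the state
        return arrows, labels

    # The vending machine of Section 2: c from s0 to s1, p from s1 to s0.
    arrows, labels = lts_to_kripke([('s0', 'c', 's1'), ('s1', 'p', 's0')])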
Definition 4.2. Let (P, Tr, src, trg, ℓ) be an LTS over Act = A ∪̇ {τ}. The associated Kripke structure (S, →, |=) over AP := A is given by
• S := P ∪ {t ∈ Tr | ℓ(t) ≠ τ},
• → := {(src(t), trg(t)) | t ∈ Tr, ℓ(t) = τ} ∪ {(src(t), t), (t, trg(t)) | t ∈ Tr, ℓ(t) ≠ τ}, and
• t |= a iff ℓ(t) = a, for t ∈ Tr with ℓ(t) ≠ τ and a ∈ A.
Ignoring paths ending within a τ-transition, which are never deemed complete anyway, this translation yields a bijective correspondence between the paths in an LTS and those in its associated Kripke structure. Consequently, any completeness criterion on the LTS induces a completeness criterion on the Kripke structure. Hence it is now well-defined when s |= CC ϕ, with s a state in an LTS, CC a completeness criterion on this LTS and ϕ an LTL formula.

Petri Nets.
A labelled Petri net is a tuple N = (S, T, F, M0, ℓ) with S and T disjoint sets of places and transitions, F : (S × T) ∪ (T × S) → N the flow relation, M0 : S → N the initial marking, and ℓ : T → Act a labelling of the transitions. Petri nets are usually depicted by drawing the places as circles and the transitions as boxes, containing their label. For x, y ∈ S ∪ T there are F(x, y) arrows (arcs) from x to y. When a Petri net represents a distributed system, a global state of this system is given as a marking, a multiset M of places, depicted by placing M(s) dots (tokens) in each place s. The initial state is M0. The behaviour of a Petri net is defined by the possible moves between markings M and M′, which take place when a transition t fires. In that case, the transition t consumes F(s, t) tokens from each place s. Naturally, this can happen only if M makes these tokens available in the first place. Next, the transition produces F(t, s) tokens in each place s. Definition 4.5 formalises this notion of behaviour.
A multiset over a set X is a function A : X → N. The behaviour of a net is captured by its associated LTS: its states are the markings, and for each transition t ∈ T that is enabled in a marking M, i.e. F(s, t) ≤ M(s) for all s ∈ S, there is an LTS-transition from M to the marking M′ given by M′(s) = M(s) − F(s, t) + F(t, s), labelled ℓ(t). Here I restrict myself to structural conflict nets, henceforth simply called nets, a class of Petri nets containing the safe Petri nets that are normally used to give semantics to process algebras. A completeness criterion on a net is a completeness criterion on its associated LTS. Now N |= CC ϕ is defined to hold iff M0 |= CC ϕ holds in the associated LTS.

CCS.
The Calculus of Communicating Systems (CCS) [Mil90] is parametrised with sets K of agent identifiers and A of names; each X ∈ K comes with a defining equation X def= P with P being a CCS expression as defined below. Act := A ∪̇ Ā ∪̇ {τ} is the set of actions, where τ is a special internal action and Ā := {ā | a ∈ A} is the set of co-names. Complementation is extended to Ā by setting the complement of ā to be a. Below, a ranges over A ∪ Ā, α over Act, and X, Y over K. A relabelling is a function f : A → A; it extends to Act by taking f(ā) to be the complement of f(a), and f(τ) := τ. The set T CCS of CCS expressions or processes is the smallest set including:
∑i∈I αi.Pi for I an index set, αi ∈ Act and Pi ∈ T CCS (guarded choice)
P|Q for P, Q ∈ T CCS (parallel composition)
P\L for L ⊆ A and P ∈ T CCS (restriction)
P[f] for f a relabelling and P ∈ T CCS (relabelling)
X for X ∈ K (agent identifier)
The process ∑i∈{1,2} αi.Pi is often written as α1.P1 + α2.P2, ∑i∈{1} αi.Pi as α1.P1, and ∑i∈∅ αi.Pi as 0. The semantics of CCS is given by the transition relation → ⊆ T CCS × Act × P(C) × T CCS, where transitions P −α,C→ Q are derived from the rules of Table 1. Ignoring the labels C ∈ P(C) for now, such a transition indicates that process P can perform the action α ∈ Act and transform into process Q. The process ∑i∈I αi.Pi performs one of the actions αj for j ∈ I and subsequently acts as Pj. The parallel composition P|Q executes an action from P, an action from Q, or a synchronisation between complementary actions c and c̄ performed by P and Q, resulting in an internal action τ. The restriction operator P\L inhibits execution of the actions from L and their complements. The relabelling P[f] acts like process P with all labels α replaced by f(α). Finally, the rule for agent identifiers says that an agent X has the same transitions as the body P of its defining equation. The standard version of CCS [Mil90] features a choice operator ∑i∈I Pi; here I use the fragment of CCS that merely features guarded choice.
The second label of a transition indicates the set of (parallel) components involved in executing this transition. The set C of components is defined as {l, r}*, that is, the set of strings over the indicators l (left) and r (right), with ε ∈ C denoting the empty string and d·C := {dσ | σ ∈ C} for d ∈ {l, r} and C ⊆ C.
Example 4.8. The process P := (X|ā.0)|ā.b.0 with X def= a.X has as outgoing transitions
t: P −a,{ll}→ P,
u: P −ā,{lr}→ (X|0)|ā.b.0,
v: P −ā,{r}→ (X|ā.0)|b.0,
w: P −τ,{ll,lr}→ (X|0)|ā.b.0, and
x: P −τ,{ll,r}→ (X|ā.0)|b.0.
These components stem from Victor Dyseryn [personal communication] and were introduced in [Gla19b]. They were not part of the standard semantics of CCS [Mil90], which can be retrieved by ignoring them.
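The derivation of such component-labelled transitions can be mechanised; the following Python sketch (mine, not from the paper) implements the transition rules for the operators used in Example 4.8, encoding processes as nested tuples and writing '~a' for the co-name ā.

    def bar(a):
        # complementation: a <-> ~a
        return a[1:] if a.startswith('~') else '~' + a

    def trans(P, defs):
        """The set of transitions (action, components, target) of P."""
        if P[0] == 'sum':                                # guarded choice
            return {(a, frozenset({''}), Q) for (a, Q) in P[1]}
        if P[0] == 'id':                                 # agent identifier
            return trans(defs[P[1]], defs)
        if P[0] == 'par':                                # parallel composition
            _, L, R = P
            lt, rt = trans(L, defs), trans(R, defs)
            res = {(a, frozenset('l' + c for c in C), ('par', Q, R))
                   for (a, C, Q) in lt}
            res |= {(a, frozenset('r' + c for c in C), ('par', L, Q))
                    for (a, C, Q) in rt}
            res |= {('tau', frozenset('l' + c for c in C)
                            | frozenset('r' + d for d in D), ('par', Q1, Q2))
                    for (a, C, Q1) in lt for (b, D, Q2) in rt
                    if a != 'tau' and b == bar(a)}       # synchronisation
            return res
        raise ValueError(P[0])

    zero = ('sum', ())
    X = ('id', 'X')
    defs = {'X': ('sum', (('a', X),))}                   # X def= a.X
    P = ('par', ('par', X, ('sum', (('~a', zero),))),    # (X | ~a.0) | ~a.b.0
         ('sum', (('~a', ('sum', (('b', zero),))),)))
    for alpha, C, _ in sorted(trans(P, defs), key=str):
        print(alpha, sorted(C))

Running it prints the five transitions listed above, with their component sets {ll}, {lr}, {r}, {ll, lr} and {ll, r}.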
Definition 4.9. The LTS of CCS is (T CCS, Tr, src, trg, ℓ), with Tr the set of derivable transitions P −α,C→ Q, where src(P −α,C→ Q) := P, ℓ(P −α,C→ Q) := α and trg(P −α,C→ Q) := Q. Employing this interpretation of CCS, one can pronounce judgements P |= CC ϕ for CCS processes P.

Labelled Transition Systems with Concurrency.
Definition 4.10. An LTS with concurrency (LTSC) is a tuple (P, Tr, src, trg, ℓ, ⌣•) consisting of an LTS (P, Tr, src, trg, ℓ) and a concurrency relation ⌣• ⊆ Tr × Tr, such that: if t ∈ Tr and π is a path from src(t) to s ∈ P such that t ⌣• v for all transitions v occurring in π, then there is a u ∈ Tr such that src(u) = s, ℓ(u) = ℓ(t) and t ⌣̸• u.
Informally, t ⌣• v means that the transition v does not interfere with t, in the sense that it does not affect any resources that are needed by t, so that in a state where t and v are both possible, after doing v one can still do a future variant of t. This notion of an LTSC stems from [GH19], although there the model is more general on various counts. I do not need this generality here.
The LTS associated with CCS can be turned into an LTSC by defining (P −α,C→ Q) ⌣• (P′ −α′,C′→ Q′) iff C ∩ C′ = ∅, that is, two transitions are concurrent iff they stem from disjoint sets of components [GH19,Gla19b]. In this LTSC, and many others, including the ones associated to nets below, ⌣• is symmetric, and thus the same as its symmetric part ⌣. The LTS associated with a net can be turned into an LTSC by defining (M, t, M′) ⌣• (N, u, N′) iff •t ∩ •u = ∅, where •t := {s ∈ S | F(s, t) > 0} is the set of preplaces of t; i.e., the two LTS-transitions stem from net-transitions that have no preplaces in common. Naturally, any LTSC can be turned into an LTS, and further into a Kripke structure, by forgetting ⌣•.
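In code, the concurrency check for CCS transitions is just disjointness of component sets; a tiny sketch (mine), reusing the (action, components, target) triples computed above:

    # t ⌣• u iff t and u stem from disjoint sets of components.
    def concurrent(t, u):
        return t[1].isdisjoint(u[1])

    t = ('a', frozenset({'ll'}), 'P')        # the a-loop of Example 4.8
    v = ('~a', frozenset({'r'}), "P'")       # the ~a-transition of component r
    assert concurrent(t, v)                  # disjoint components: concurrent
    assert not concurrent(t, ('tau', frozenset({'ll', 'r'}), 'Q'))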

Progress, Justness and Fairness
With the above definitions one can pronounce judgements D |= CC ϕ for distributed systems D given as a net or a CCS expression, for instance. Through the translations of Definitions 4.7 or 4.9 one renders D as a state P in an LTS. The completeness criterion CC is given as a set of paths on that LTS. Then, using Definition 4.2, P is seen as a state in a Kripke structure, and CC as a set of paths on that Kripke structure. Here it is well-defined when P |= CC ϕ holds, and this verdict applies to the judgement D |= CC ϕ. The one thing left to explain is where the completeness criterion CC comes from. In this section I define completeness criteria CC ∈ {⊤, Pr, J, SF(T), WF(T) | T ∈ P(P(Tr))} on LTSs (P, Tr, src, trg, ℓ), to be used in judgements P |= CC ϕ, for P ∈ P and ϕ an LTL formula.
These criteria are called strong fairness (SF) and weak fairness (WF), both parametrised with a set T ⊆ P(Tr) of tasks, justness (J), progress (Pr) and the trivial completeness criterion (⊤). Justness is merely defined on LTSCs. I confine myself to criteria that hold finite paths ending within a transition to be incomplete.
Reading Example 3.3, one could find it unfair that Bart might never get a beer. Strong and weak fairness are completeness criteria that postulate that Bart will get a beer, namely by ruling out as incomplete the infinite paths in which he does not. They can be formalised by introducing a set T of tasks, each being a set of transitions (in an LTS or Kripke structure).
Definition 5.1 ([GH19]). A task T ∈ T is enabled in a state s iff s has an outgoing transition from T. It is perpetually enabled on a path π iff it is enabled in every state of π. It is relentlessly enabled on π if each suffix of π contains a state in which it is enabled. It occurs in π if π contains a transition t ∈ T.
A path π is weakly fair if, for every suffix π′ of π, each task that is perpetually enabled on π′ occurs in π′. It is strongly fair if, for every suffix π′ of π, each task that is relentlessly enabled on π′ occurs in π′.
As completeness criteria, these notions take only the fair paths to be complete. In Example 3.3 it suffices to have a task "Bart gets a beer", consisting of the three transitions leading to the B state. Now in any path in which Bart never gets a beer this task is perpetually enabled, yet never taken. Hence weak fairness suffices to rule out such paths. One has D |= WF (T ) FB.
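For ultimately periodic paths, weak fairness reduces to a check on the cycle: a task enabled in every state of the cycle must occur among the cycle's transitions. A minimal sketch (mine; transitions are encoded as (source, target) pairs):

    def weakly_fair(cycle_states, cycle_transitions, tasks, enabled):
        # enabled(s) yields the outgoing transitions of state s
        for T in tasks:
            perpetually_enabled = all(
                any(t in T for t in enabled(s)) for s in cycle_states)
            occurs = any(t in T for t in cycle_transitions)
            if perpetually_enabled and not occurs:
                return False
        return True

    # Example 3.3: the task "Bart gets a beer" is enabled in every state
    # of the cycle A, C, A, C, ... but never occurs, so that path is unfair.
    bart = {('A', 'B'), ('C', 'B')}
    enabled = lambda s: {(s, x) for x in 'ABC' if x != s}
    assert not weakly_fair(['A', 'C'], [('A', 'C'), ('C', 'A')], [bart], enabled)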
Local fairness [GH19] allows the tasks T to be declared on an ad hoc basis for the application at hand. On this basis one can call it unfair if Bart doesn't get a beer, without requiring that Cameron should get a beer as well. Global fairness, on the other hand, distils the tasks of an LTS in a systematic way out of the structure of a formalism, such as pseudocode, process algebra or Petri nets, that gave rise to the LTS. A classification of many ways to do this, and thus of many notions of strong and weak fairness, appears in [GH19]. In fairness of directions [Fra86], for instance, each transition in an LTS is assumed to stem from a particular direction, or instruction, in the pseudocode that generated the LTS; now each direction represents a task, consisting of all transitions derived from that direction.
In [GH19] the assumption that a system will never stop when there are transitions to proceed is called progress. In Example 3.4 it takes a progress assumption to conclude that Bart will get his beer. Progress fits the default completeness criterion introduced before, i.e., |= Pr is the same as |=. Not (even) assuming progress can be formalised by the trivial completeness criterion ⊤ that declares all paths to be complete. Naturally, E ⊭ ⊤ FB.
Completeness criterion D is called stronger than criterion C if it rules out more paths as incomplete. So ⊤ is the weakest of all criteria, and, for any given collection T, strong fairness is stronger than weak fairness. When assuming that each transition occurs in at least one task (which can be ensured by incorporating a default task consisting of all transitions), progress is weaker than weak fairness.
Justness [GH19] is a strong form of progress, defined on LTSCs.
Definition 5.2. A path π is just if for each transition t with its source state s := src(t) occurring on π, the (or any) suffix of π starting at s contains a transition u with t ⌣̸• u.
Example 5.3. The infinite path π that only ever takes transition t in Example 4.8/4.11 is unjust. Namely, with transition v in the rôle of the t from Definition 5.2, π contains no transition y with v ⌣̸• y.
Informally, the only reason for an enabled transition not to occur is that one of its resources is eventually used for some other transition. In Example 3.5, for instance, the orders of Alice and Cameron are clearly concurrent with the one of Bart, in the sense that they do not compete for shared resources. Taking t to be the transition in which Bart gets his beer, any path in which t does not occur is unjust. Thus F |= J FB.
For most choices of T found in the literature, weak fairness is a strictly stronger completeness criterion than justness. In Example 3.3, for instance, the path in which Bart does not get a beer is just. Namely, any transition u giving Alice or Cameron a beer competes for the same resource as the transition t giving Bart a beer, namely the attention of the bartender. Thus t ⌣̸• u, and consequently D ⊭ J FB.
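A justness check for ultimately periodic paths can be sketched as follows (my encoding, not the paper's): an LTS transition is a tuple (source, action, components, target), a path is given by a finite prefix and a cycle of transitions, and a transition u eliminates t when their component sets intersect.

    def interferes(t, u):                    # u uses a resource needed by t
        return not t[2].isdisjoint(u[2])

    def is_just(pre, cyc, enabled, B=frozenset()):
        # enabled(s) yields the outgoing transitions of state s;
        # the modelled path is pre followed by infinitely many rounds of cyc.
        path = pre + cyc
        for i, tr in enumerate(path):
            s = tr[0]
            suffix = path[i:] if i < len(pre) else cyc   # the cycle repeats forever
            for t in enabled(s):
                if t[1] in B:
                    continue                 # blocked actions are exempt (anticipating Section 6)
                if not any(interferes(t, u) for u in suffix):
                    return False             # t is enabled but never eliminated
        return True

    # Example 5.3: forever taking only the a-loop t is unjust, since the
    # enabled transition v of component r is never eliminated.
    t = ('P', 'a', frozenset({'ll'}), 'P')
    v = ('P', '~a', frozenset({'r'}), 'P1')
    assert not is_just([], [t], lambda s: {t, v} if s == 'P' else set())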

Blocking Actions
I now present reactive temporal logic by extending the ternary judgements P |= CC ϕ defined above to quaternary judgements P |= CC B ϕ, with B ⊆ A a set of blocking actions. Here A is the set of all observable actions of the LTS on which LTL is interpreted. The intuition is that actions b ∈ B may be blocked by the environment, but actions a ∈ A\B may not. The relation |= B can be used to formalise the assumption that the actions in A\B are not under the control of the user of the modelled system, or that there is an agreement with the user not to block them. Either way, it is a disclaimer on the wrapping of our temporal judgement, that it is valid only when applying the involved distributed system in an environment that may block actions from B only. The hidden action τ may never be blocked.
I will present the relations |= CC B for each choice of CC discussed in the previous section, and each B ⊆ A. When writing P |= CC B ϕ, the modifier B adapts the default completeness criterion by declaring certain finite paths complete, and the modifier CC, for CC other than ⊤ and Pr, adapts it by declaring some infinite paths incomplete. Starting with CC = Pr, I call a path B-progressing iff it either is infinite or ends in a state of which all outgoing transitions have a label from B, and write s |= Pr B ϕ, or s |= B ϕ for short, if π |= ϕ holds for all B-progressing paths π starting in s. The completeness criterion B-progress, which takes the B-progressing paths to be the complete ones, says that a system may stop in a state with outgoing transitions only when they are all blocked by the environment. Note that the standard LTL interpretation |= is simply |= ∅, obtained by taking the empty set of blocking actions.
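Concretely, for finite paths the B-progress check only looks at the last state; a two-line sketch (mine), with out(s) yielding the (label, target) pairs of the outgoing transitions of s (infinite paths are B-progressing by definition):

    def b_progressing(path, out, B):
        # complete iff every transition enabled at the end can be blocked
        return all(label in B for (label, _) in out(path[-1]))

    out = lambda s: {('c', 's1')} if s == 's0' else {('p', 's0')}
    print(b_progressing(['s0'], out, B={'c'}))   # True: c may be blocked
    print(b_progressing(['s0'], out, B=set()))   # False: the run must progress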
In the presence of the modifier B, Definition 5.2 is adapted as follows: Definition 6.1. A path π is B-just if for each t ∈ Tr with ℓ(t) ∉ B and its source state s := src(t) occurring on π, any suffix of π starting at s contains a transition u with t ⌣̸• u.
It doesn't matter whether ℓ(u) ∈ B. The completeness criterion B-justness takes the B-just paths to be the complete ones. Write s |= J B ϕ if π |= ϕ for all B-just paths π starting in s. For the remaining cases CC = SF(T) and CC = WF(T), adapt the first sentence of Definition 5.1 as follows: a task T ∈ T is B-enabled in a state s iff s has an outgoing transition t ∈ T with ℓ(t) ∉ B.
Strong and weak B-fairness of paths is then defined as in Definition 5.1, but replacing "enabled" by "B-enabled". The above completes the formal definition of the validity of temporal judgements P |= CC B ϕ with ϕ an LTL formula, B ⊆ A, and either
• CC = Pr and P a state in an LTS, a Petri net or a CCS expression,
• CC = J and P a state in an LTSC, a Petri net or a CCS expression, or
• CC = WF(T) or SF(T) and P a state in an LTS (P, Tr, src, trg, ℓ) with T ∈ P(P(Tr)), or P a net or CCS expression with associated LTS (P, Tr, src, trg, ℓ) and T ∈ P(P(Tr)).
Namely, in case P is a state in an LTS, it is also a state in the associated Kripke structure K. Moreover, B and CC combine into a single completeness criterion CC(B) on that LTS, which translates as a completeness criterion CC(B) on K. Now Definition 3.2 tells whether P |= CC(B) ϕ holds.
In case CC = J and P a state in an LTSC, B and J combine into a single completeness criterion J(B) on that LTSC, which is also a completeness criterion on the associated LTS; now proceed as above.
In case P is a Petri net or CCS expression, first translate it into a state in an LTS or LTSC, using Definitions 4.7 or 4.9, respectively, and proceed as above.
Temporal judgements P |= CC B ϕ, as introduced above, are not limited to the case that ϕ is an LTL formula. In [Gla20b] I show that allowing ϕ to be a CTL formula instead poses no additional complications, and I expect the same to hold for other temporal logics.
Judgements P |= CC B ϕ get stronger (= less likely true) when the completeness criterion CC is weaker, and the set B of blocking actions larger.

Translating Reactive LTL into Standard LTL
Here I translate judgements s |= CC B ϕ in reactive LTL into equivalent judgements ŝ |= ψ in traditional LTL, albeit with infinite conjunctions. The price to be paid for this is an extra dose of atomic propositions. I start with judgements s |= CC B ϕ interpreted in an LTS (P, Tr, src, trg, ℓ), as this is where the completeness criterion CC(B) takes shape. I use a slightly different translation from LTSs to Kripke structures than the one of Definition 4.2; it inserts a state halfway along any transition, even if it is labelled τ. However, τ will not be an atomic proposition of the resulting Kripke structure K, and the new halfway states do not inherit a transition label. This change affects the bookkeeping for next-state operators, but not in a bad way.
To translate |= CC B into |=, I have to make provisions for finite CC(B)-complete paths that are Pr(∅)-incomplete, and for infinite CC(B)-incomplete paths that are Pr(∅)-complete. In order not to tackle these opposite forces in the same step, I first present a translation from reactive LTL into LTL with the trivial completeness criterion. That is, for each choice of CC from Section 5, each set B ⊆ A of blocking actions, and each LTL formula ϕ, I present a formula ψ such that s |= CC B ϕ iff s |= ⊤ ψ. Given a collection T of tasks and a set B of blocking actions, introduce for each task T ∈ T two atomic propositions en T B and oc T. Proposition en T B holds in those states of K that stem from states s ∈ P in which the task T is B-enabled, i.e., that have an outgoing transition t ∈ T with ℓ(t) ∉ B. Additionally, it holds in those states of K that stem from transitions t ∈ Tr such that T is B-enabled in both src(t) and trg(t). Proposition oc T holds in those states of K that stem from a transition t ∈ T; this is where the task occurs. Now the formula
WF(T) B := ⋀ T∈T G(G en T B ⇒ F oc T)
holds for a path π of K exactly when π is weakly B-fair. Hence the formula WF(T) B ⇒ ϕ says that ϕ holds on weakly B-fair paths, and one has s |= WF(T) B ϕ iff s |= ⊤ (WF(T) B ⇒ ϕ); strong B-fairness is captured likewise by SF(T) B := ⋀ T∈T G(GF en T B ⇒ F oc T). The formulas G(Gen ⇒ Foc) and G(GFen ⇒ Foc) stem from [GPSS80]. In the literature one sometimes finds the equivalent forms FGen ⇒ GFoc and GFen ⇒ GFoc.
Progress, i.e., the case CC = Pr , can be dealt with in the same way, by recognising it as weak or strong fairness involving a single task, spanning all transitions.
To deal with B-justness, introduce atomic propositions en t and oc t for each transition t ∈ Tr with ℓ(t) ∉ B. Proposition en t holds in the state src(t) only, whereas oc t holds in all states of K that stem from a transition u ∈ Tr with t ⌣̸• u. Now
J B := ⋀ t∈Tr, ℓ(t)∉B G(en t ⇒ F oc t)
holds for a path of K iff it is B-just. Consequently, s |= J B ϕ iff s |= ⊤ (J B ⇒ ϕ). It remains to translate |= ⊤ into |=. To this end, I transform K into K̂ by adding a self-loop at each of its states. I also introduce a fresh atomic proposition tr, for transition, and use it to label all states in K̂ that stem from a transition in Tr. For each finite path π in K let π∞ be the infinite path in K̂ obtained by repeating the last state of π infinitely often. In case π is infinite, let π∞ := π. Let Z be the completeness criterion on K̂ that declares each path of the form π∞ complete. These are exactly the infinite paths without a subsequence s s t with s ≠ t. So ∞ is a bijection between the paths of K and the Z-complete paths of K̂. Let Q be the transformation on LTL formulas that distributes over all connectives except X; the next-state operator needs care, since on real steps of K̂ the proposition tr strictly alternates, whereas the added self-loops preserve it, so one may take Q(Xϕ) := (tr ∧ X(¬tr ∧ Q(ϕ))) ∨ (¬tr ∧ X(tr ∧ Q(ϕ))). A trivial induction on ϕ shows that π |= ϕ holds in K iff π∞ |= Q(ϕ) holds in K̂. This implies that s |= ⊤ ϕ holds in K iff ŝ |= Z Q(ϕ) holds in K̂. Z-completeness can itself be stated in LTL, as Z := G((tr ∧ Xtr) ⇒ XG tr) ∧ G((¬tr ∧ X¬tr) ⇒ XG ¬tr), so that finally ŝ |= Z Q(ϕ) iff ŝ |= (Z ⇒ Q(ϕ)). The above translation from reactive LTL into standard LTL may give the impression that reactive LTL is not more expressive than standard LTL. This conclusion is not really valid, due to the addition to the formalism of fresh atomic propositions. It is for instance widely accepted that CTL cannot be faithfully translated into LTL. Yet, if one introduces an atomic proposition p ϕ for each CTL formula ϕ, one that is declared to hold for all states that satisfy ϕ, one trivially obtains s |= ϕ iff s |= p ϕ. This would suggest that CTL can be faithfully translated into LTL, even without using any of the modal operators of LTL.

Safety Properties
A safety property is a temporal formula ϕ that holds for a path π iff it holds for all finite prefixes of π. In that case D |= ϕ iff π |= ϕ for all finite paths π starting in the initial state of D. Such a property can be thought to say that nothing bad will ever happen [Lam77]. The intuition is that a bad thing must be observable in a finite prefix of a run, so that D ⊭ ϕ iff π ⊭ ϕ for some finite path π starting in the initial state of D.
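The finite-prefix view suggests a simple detection scheme; a sketch (mine), where holds_phi evaluates the property on a finite path, for instance with the evaluator sketched in Section 3:

    def violating_prefix(path, holds_phi):
        # for a safety property, any violation shows up in a finite prefix
        for k in range(1, len(path) + 1):
            if not holds_phi(path[:k]):
                return path[:k]              # the finite "bad thing"
        return None

    g_not_bad = lambda p: 'bad' not in p     # the safety property G(not bad)
    print(violating_prefix(['s0', 's1', 'bad', 's2'], g_not_bad))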
Proposition 8.1. The fragment of LTL given by the grammar
ϕ, ψ ::= p | ¬p | ϕ ∧ ψ | ϕ ∨ ψ | Yϕ | Gϕ | ψWϕ (with p ∈ AP)
describes only safety properties. Here Y is the dual of X, and ψWϕ abbreviates (ψUϕ) ∨ Gψ.
Proof. Let ϕ and ψ be safety properties. I show that also ϕ ∨ ψ is a safety property. Suppose ϕ ∨ ψ holds for all finite prefixes of a path π, yet π ⊭ ϕ and π ⊭ ψ. Then there are finite prefixes ρ1 and ρ2 of π with ρ1 ⊭ ϕ and ρ2 ⊭ ψ. Let ρ be the longer of the two. As ϕ is a safety property and ρ1 is a finite prefix of the finite path ρ, also ρ ⊭ ϕ; likewise ρ ⊭ ψ. So ρ ⊭ ϕ ∨ ψ, contradicting the assumption. Conversely, if π |= ϕ ∨ ψ then, say, π |= ϕ, so each finite prefix of π satisfies ϕ, and hence ϕ ∨ ψ.
That p and ¬p are safety properties, for p ∈ AP , and that the safety properties are closed under conjunction, is trivial.
For safety properties ϕ, the reactive part of reactive temporal logic is irrelevant, as one has D |= CC B ϕ iff D |= ϕ, for all completeness criteria CC and all B ⊆ A. Namely, both hold iff π |= ϕ for all finite paths π starting in the initial state of D. This hinges on the requirement of feasibility imposed on completeness criteria [AFK88,GH19]: any finite partial run can be extended to a complete run; in other words, any finite path must be a prefix of a complete path.

Part II. Formalising Mutual Exclusion and Fair Scheduling in Reactive LTL
Here I recall the mutual exclusion problem as posed by Dijkstra [Dij65], and the related notion of a fair scheduler [GH15a]. Employing reactive LTL, I formulate requirements that tell exactly what does and does not count as a mutual exclusion protocol, and as a fair scheduler. Since my requirements are parametrised by completeness criteria, which are progress, justness or fairness assumptions, I obtain a hierarchy of quality criteria for mutual exclusion protocols and fair schedulers, where a weaker completeness criterion characterises a higher quality protocol. When allowing (strong or) weak fairness as parameter in my requirements, an intuitively unsatisfactory mutual exclusion protocol or fair scheduler, which I call the gatekeeper, meets all requirements. This indicates that weak fairness is too strong an assumption to be used in these parameters.

The Mutual Exclusion Problem and its History
The mutual exclusion problem was presented by Dijkstra in [Dij65] and formulated as follows: "To begin, consider N computers, each engaged in a process which, for our aims, can be regarded as cyclic. In each of the cycles a so-called "critical section" occurs and the computers have to be programmed in such a way that at any moment only one of these N cyclic processes is in its critical section. In order to effectuate this mutual exclusion of critical-section execution the computers can communicate with each other via a common store. Writing a word into or nondestructively reading a word from this store are undividable operations; i.e., when two or more computers try to communicate (either for reading or for writing) simultaneously with the same common location, these communications will take place one after the other, but in an unknown order." Dijkstra proceeds to formulate a number of requirements that a solution to this problem must satisfy, and then presents a solution that satisfies those requirements. The most central of these are:
• (Mutex) "no two computers can be in their critical section simultaneously", and
• (Deadlock-freedom) if at least one computer intends to enter its critical section, then at least one "will be allowed to enter its critical section in due time".
Two other important requirements formulated by Dijkstra are
• (Speed independence) "(b) Nothing may be assumed about the relative speeds of the N computers", and
• (Optionality) "(c) If any of the computers is stopped well outside its critical section, this is not allowed to lead to potential blocking of the others."
A crucial assumption is that each computer, in each cycle, spends only a finite amount of time in its critical section. This is necessary for the correctness of any mutual exclusion protocol.
For the purpose of the last requirement one can partition each cycle into a critical section, a noncritical section (in which the process starts), an entry protocol between the noncritical and the critical section, during which a process prepares for entry in negotiation with the competing processes, and an exit protocol, that comes right after the critical section and before return to the noncritical section. Now "well outside its critical section" means in the noncritical section. Requirement (c) can equivalently be stated as admitting the possibility that a process chooses to remain forever in its noncritical section, without applying for entry in the critical section ever again.
Knuth [Knu66] proposes a strengthening of the deadlock-freedom requirement, namely
• (Starvation-freedom) If a computer intends to enter its critical section, then it will be allowed to enter in due time.
He also presents a solution that is shown to satisfy this requirement, as well as Dijkstra's requirements. Henceforth I define a correct solution of the mutual exclusion problem as one that satisfies both mutex and starvation-freedom, as formulated above, as well as optionality. I speak of "speed-independent mutual exclusion" when also insisting on requirement (b) above.
The special case of the mutual exclusion problem for two processes (N = 2) was presented by Dijkstra in [Dij63], two years prior to [Dij65]. There Dijkstra presented a solution found by T.J. Dekker in 1959, and showed that it satisfies all requirements of [Dij65]. Although not explicitly stated in [Dij63], the arguments given therein imply straightforwardly that Dekker's solution also satisfies Knuth's starvation-freedom requirement above.
Peterson [Pet81] presented a considerable simplification of Dekker's algorithm that satisfies the same correctness requirements. Many other mutual exclusion protocols appear in the literature, the most prominent being Lamport's bakery algorithm [Lam74] and Szymański's mutual exclusion algorithm [Szy88]. These guarantee some additional correctness criteria besides the ones discussed above.

Fair Schedulers
In [GH15b] a fair scheduler is defined as "a reactive system with two input channels: one on which it can receive requests r 1 from its environment and one on which it can receive requests r 2 . We allow the scheduler to be too busy shortly after receiving a request r i to accept another request r i on the same channel. However, the system will always return to a state where it remains ready to accept the next request r i until r i arrives [FS1]. In case no request arrives it remains ready forever. The environment is under no obligation to issue requests, or to ever stop issuing requests. Hence for any numbers n 1 and n 2 ∈ N ∪ {∞} there is at least one run of the system in which exactly that many requests of type r 1 and r 2 are received. Every request r i asks for a task t i to be executed. The crucial property of the fair scheduler is that it will eventually grant any such request. Thus, we require that in any run of the system each occurrence of r i will be followed by an occurrence of t i [FS2]." "We require that in any partial run of the scheduler there may not be more occurrences of t i than of r i , for i = 1, 2 [FS3].
The last requirement is that between each two occurrences of t i and t j for i, j ∈ {1, 2} an intermittent activity e is scheduled [FS4]." This fair scheduler serves two clients, but the concept generalises smoothly to N clients.
The intended applications of fair schedulers are for instance in operating systems, where multiple application processes compete for processing on a single core, or radio broadcasting stations, where the station manager needs to schedule multiple parties competing for airtime. In such cases each applicant must get a turn eventually. The event e signals the end of the time slot allocated to an application process on the single core, or to a broadcast on the radio station.
Fair schedulers occur (in suitable variations) in many distributed systems. Examples are First in First out, Round Robin, and Fair Queueing scheduling algorithms as used in network routers [Nag85,Nag87] and operating systems [Kle64], or the Completely Fair Scheduler, which is the default scheduler of the Linux kernel since version 2.6.23.
Each action r i , t i and e can be seen as a communication between the fair scheduler and one of its clients. In a reactive system such communications will take place only if both the fair scheduler and its client are ready for it. Requirement FS1 of a fair scheduler quoted above effectively shifts the responsibility for executing r i to the client. The actions t i and e, on the other hand, are seen as the responsibility of the fair scheduler. We do not consider the possibility that the fair scheduler fails to execute t i merely because the client does not collaborate. Hence [GH15b] assumes that the client cannot prevent the actions t i and e from occurring. It is furthermore assumed that executing the actions r i , t i and e takes a finite amount of time only.
A fair scheduler closely resembles a mutual exclusion protocol. However, its goal is not to achieve mutual exclusion. In most applications, mutual exclusion can be taken for granted, as it is physically impossible to allocate the single core to multiple applications at the same time, or the (single frequency) radio sender to multiple simultaneous broadcasts. Instead, its goal is to ensure that no applicant is passed over forever.
It is not hard to obtain a fair scheduler from a mutual exclusion protocol. For suppose we have a mutual exclusion protocol M, serving two processes P_i (i = 1, 2). I instantiate the noncritical section of Process P_i as patiently awaiting the request r_i. As soon as this request arrives, P_i leaves the noncritical section and starts the entry protocol to get access to the critical section. Starvation-freedom guarantees that P_i will reach its critical section. Now the critical section consists of scheduling task t_i, followed by the intermittent activity e. Trivially, the composition of the two processes P_i, in combination with protocol M, constitutes a fair scheduler, in that it meets the above four requirements.
One cannot quite construct a mutual exclusion protocol from a fair scheduler, due to the fact that in a mutual exclusion protocol leaving the critical section is controlled by the client process. For this purpose one would need to adapt the assumption that the client of a fair scheduler cannot block the intermittent activity e into the assumption that the client can postpone this action, but for a finite amount of time only. In this setting one can build a mutual exclusion protocol, serving two processes P_i (i = 1, 2), from a fair scheduler F. Process i simply issues request r_i at F as soon as it has left the noncritical section, and when F communicates the action t_i, Process i enters its critical section. Upon leaving its critical section, which is assumed to happen after a finite amount of time, it participates in the synchronisation e with F. Trivially, this yields a correct mutual exclusion protocol.
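This construction can be sketched in Python with threads. The sketch is mine: the class and event names are illustrative, and a busy-waiting round-robin loop stands in for an arbitrary fair scheduler F.

    import threading, time

    class FairSchedulerMutex:
        """Mutual exclusion built from a fair scheduler, as described above.
        Client i issues r_i by setting a flag, waits for the grant t_i,
        and performs e on leaving its critical section (assumed finite)."""

        def __init__(self, n):
            self.want = [False] * n                              # pending r_i
            self.grant = [threading.Event() for _ in range(n)]   # grants t_i
            self.done = threading.Event()                        # the action e
            threading.Thread(target=self._schedule, args=(n,), daemon=True).start()

        def _schedule(self, n):
            while True:                       # round-robin over the clients
                granted = False
                for i in range(n):
                    if self.want[i]:
                        self.grant[i].set()   # perform t_i
                        self.done.wait()      # await the synchronisation e
                        self.done.clear()
                        granted = True
                if not granted:
                    time.sleep(0.001)         # idle: no pending requests

        def acquire(self, i):
            self.want[i] = True               # issue the request r_i
            self.grant[i].wait()              # await the grant t_i
            self.grant[i].clear()
            self.want[i] = False

        def release(self, i):
            self.done.set()                   # perform e, ending the turn

    m = FairSchedulerMutex(2)
    def client(i):
        m.acquire(i)
        print(f"process {i} in critical section")
        m.release(i)

    threads = [threading.Thread(target=client, args=(i,)) for i in (0, 1)]
    for t in threads: t.start()
    for t in threads: t.join()

Since the scheduler grants one request at a time and waits for e before the next grant, mutual exclusion holds by construction, and starvation-freedom is inherited from the fairness of the round-robin loop.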

Formalising the Requirements for Fair Schedulers in Reactive LTL
The main reason fair schedulers were defined in [GH15b] was to serve as an example of a realistic class of systems of which no representative can be correctly specified in CCS, or similar process algebras, or in Petri nets. Proving this impossibility result necessitated a precise formalisation of the four requirements quoted in Section 10. Through the provided translations of CCS and Petri nets into LTSs, a fair scheduler rendered in CCS or Petri nets can be seen as a state F in an LTS over the set {r_i, t_i, e | i = 1, 2} of visible actions; all other actions can be considered internal and renamed into τ.
Let a partial trace of a state s in an LTS be the sequence of visible actions encountered on a path starting in s [Gla93]. Now the last two requirements (FS3 and FS4) of a fair scheduler are simple properties that should be satisfied by all partial traces σ of state F:

  (FS3) σ contains no more occurrences of t_i than of r_i, for i = 1, 2;
  (FS4) σ contains an occurrence of e between each two occurrences of t_i and t_j, for i, j ∈ {1, 2}.

FS4 can be conveniently rendered in LTL:

  (FS4)  F ⊨ G(t_i ⇒ Y((¬t_1 ∧ ¬t_2) W e))  for i = 1, 2.

Since FS4 is a safety property, it makes no difference whether and how ⊨ is annotated with B and CC. In [Lam83], Lamport argues against the use of the next-state operator X, as it is incompatible with abstraction from irrelevant details in system descriptions. When following this advice, the weak next-state operator Y in FS4 can be replaced by t_i W; on Kripke structures distilled from LTSs the meaning is the same.
Unfortunately, FS3 cannot be formulated in LTL, due to the need to keep count of the difference in the number of r_i and t_i actions encountered on a path. However, one could strengthen FS3 into

  (FS3′) σ contains an occurrence of r_i between each two occurrences of t_i, and prior to the first occurrence of t_i, for i ∈ {1, 2}.

This would restrict the class of acceptable fair schedulers, but keep the most interesting examples. Consequently, the impossibility result from [GH15b] applies to this modified class as well. FS3′ can be rendered in LTL in the same style as FS4:

  (FS3′)  F ⊨ ((¬t_i) W r_i) ∧ G(t_i ⇒ Y((¬t_i) W r_i))  for i = 1, 2.

Requirement FS2 involves a quantification over all complete runs of the system, and thus depends on the completeness criterion CC employed. It can be formalised as

  (FS2)  F ⊨^CC_B G(r_i ⇒ F t_i)  for i = 1, 2.

The set B should contain r_1 and r_2, as these actions are supposed to be under the control of the users of a fair scheduler. However, the actions t_1, t_2 and e should not be in B, as they are under the control of the scheduler itself. In [GH15b] the completeness criterion employed is justness, so the above formula with CC := J captures the requirement on the fair schedulers that are shown in [GH15b] not to exist in CCS or Petri nets. However, keeping CC a variable allows one to pose the question under which completeness criterion a fair scheduler can be rendered in CCS. Naturally, it needs to be a stronger criterion than justness. In [GH15b] it is shown that weak fairness suffices.
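On finite partial traces, FS3 and FS4 are easily checked mechanically. The following Python sketch does so for traces rendered as lists of action names; the string encoding of the actions is mine.

    def satisfies_fs3(trace):
        """FS3: no prefix of the trace contains more t_i than r_i."""
        for i in (1, 2):
            balance = 0
            for act in trace:
                if act == f"r{i}": balance += 1
                if act == f"t{i}": balance -= 1
                if balance < 0:
                    return False
        return True

    def satisfies_fs4(trace):
        """FS4: between any two of t_1, t_2 an activity e occurs."""
        task_open = False                 # a task occurred with no e since
        for act in trace:
            if act in ("t1", "t2"):
                if task_open:
                    return False
                task_open = True
            elif act == "e":
                task_open = False
        return True

    assert satisfies_fs3(["r1", "t1", "e", "r2", "t2"])
    assert not satisfies_fs3(["t1"])                    # t_1 precedes any r_1
    assert not satisfies_fs4(["r1", "r2", "t1", "t2"])  # no e between the tasks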
FS2 is a good example of a requirement that cannot be rendered correctly in standard LTL. Writing F ⊨^CC G(r_i ⇒ F t_i), without the annotation B, would rule out the complete runs of F that end because the user of F never supplies the input r_j ∈ B. Consider, for instance, a CCS process whose initial r_1-transition leads to a state whose only outgoing transition is labelled r_2, with t_1 occurring only thereafter. Such a process satisfies this formula, as well as FS3 and FS4; yet it does not satisfy Requirement FS2. Namely, the path consisting of the r_1-transition only is complete, since it ends in a state of which the only outgoing transition has the label r_2 ∈ B. It models a (complete) run that can occur when the environment never issues a request r_2. Yet on this path r_1 is not followed by t_1.
Requirement FS1 is by far the hardest to formalise. In [GH15b] two formalisations are shown to be equivalent: one involving a coinductive definition of B-just paths that exploits the syntax of CCS, and the other requiring that Requirements FS2-4 are preserved under putting an input interface around Process F. The latter demands that (F[f] | I_1 | I_2) ∖ {c_1, c_2} also should satisfy FS2-4; here f is a relabelling with f(r_i) = c_i, f(t_i) = t_i and f(e) = e for i = 1, 2, and I_i def= r_i.c̄_i.I_i for i ∈ {1, 2}. A formalisation of FS1 on Petri nets also appears in [GH15b]: each complete path π with only finitely many occurrences of r_i should contain a state (= marking) M, such that there is a transition v with ℓ(v) = r_i and •v ≤ M, and for each transition u that occurs in π past M one has •v ∩ •u = ∅.
When discussing proposals for fair schedulers by others, FS1 is the requirement that is most often violated, and explaining why is not always easy.
In reactive LTL, this requirement is formalised as

  (FS1)  F ⊨^CC_{B∖{r_i}} GF r_i  for i = 1, 2,

if one wants to discuss the completeness criterion CC as a parameter. The surprising element in this temporal judgement is the subscript B∖{r_i} = {r_{3-i}}, which contrasts with the assumption that requests are under the control of the environment. FS1 says that, although there is no guarantee that user i of F will ever issue request r_i, under the assumption that the user does want to make such a request, making the request should certainly succeed. This means that the protocol itself does not stand in the way of making this request.
The combination of Requirements FS1 and 2, which use different sets of blocking actions as a parameter, is enabled by reactive LTL as presented here.
The following examples, taken from [GH15b], show that all the above requirements are necessary for the result from [GH15b] that fair schedulers cannot be rendered in CCS.
• The CCS process F_1 | F_2 with F_i def= r_i.t_i.e.F_i satisfies FS1, FS2 and FS3′. In FS1 and FS2 one needs to take CC := J, as progress is not a strong enough assumption here.
• The process F_0 with F_0 def= r_1.t_1.e.F_0 + r_2.t_2.e.F_0 satisfies FS2-4. Here FS2 merely needs CC := Pr, that is, the assumption of progress. Furthermore, it satisfies FS1 with CC := SF(T), as long as the tasks r_1 and r_2 are in T; here the task r_i is the set of transitions with label r_i.

• The process X, given by the LTS depicted on the right, satisfies FS3′ and FS4, FS2 with CC := Pr, and FS1 with CC := WF(T), thereby improving on Process F_0 and constituting the best CCS approximation of a fair scheduler seen so far. (The grey shadows in the diagram represent copies of the states at the opposite end, so the transitions on the far right and bottom loop around.) Yet, intuitively, FS1 is not ensured at all, meaning that weak fairness is too strong an assumption: nothing really prevents all the choices between r_2 and any other action a from being made in favour of a.

Formalising Requirements for Mutual Exclusion in Reactive LTL
Define a process i participating in a mutual exclusion protocol to cycle through the stages noncritical section, entry protocol, critical section and exit protocol, in that order, as explained in Section 9. Modelled as an LTS, its visible actions will be en_i, ln_i, ec_i and lc_i, of entering and leaving its (non)critical section.

Put ln_i in B, to make leaving the noncritical section a blocking action. The environment blocking it is my way of allowing the client process to stay in its noncritical section forever. This is the manner in which the requirement optionality is captured in reactive temporal logic. On the other hand, ec_i should not be in B, for one does not consider the starvation-freedom property of a mutual exclusion protocol to be violated simply because the client process refuses to enter the critical section when allowed by the protocol. Likewise, en_i is not in B. Although exiting the critical section is in fact under control of the client process, it is assumed that it will not stay in the critical section forever. In the models of this paper this can simply be achieved by leaving lc_i outside B. Hence B := {ln_i | i = 1, ..., N}.

My first requirement on mutual exclusion protocols P simply says that the actions en_i, ln_i, ec_i and lc_i have to occur in the right order: each partial trace of P, projected onto the actions of process i, must be a prefix of the infinite sequence ln_i ec_i lc_i en_i ln_i ec_i lc_i en_i ... (ME1). The second is a formalisation of mutex, saying that only one process can be in its critical section at the same time:

  (ME2)  P ⊨ G(ec_i ⇒ (¬ec_j) W lc_i)  for i, j = 1, ..., N with i ≠ j.

Both ME1 and ME2 are safety properties, and thus unaffected by changing ⊨ into ⊨^CC or ⊨^CC_B. The starvation-freedom requirement of Section 9 can be formalised as

  (ME3_CC)  P ⊨^CC_B G(ln_i ⇒ F ec_i).

Here the choice of a completeness criterion is important. Finally, the following requirements are similar to starvation-freedom, and state that from each section in the cycle of a process i the next section will in fact be reached:

  (ME4_CC)  P ⊨^CC_B G(ec_i ⇒ F lc_i),
  (ME5_CC)  P ⊨^CC_B G(lc_i ⇒ F en_i),
  (ME6_CC)  P ⊨^CC_{B∖{ln_i}} (F ln_i ∧ G(en_i ⇒ F ln_i)).

In regards to reaching the end of the noncritical section, this should be guaranteed only when assuming that the process wants to leave its noncritical section; hence in ME6 the action ln_i is excepted from B.
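As with FS3 and FS4, the safety requirements ME1 and ME2 can be checked mechanically on finite partial traces. A small Python sketch, with my own string encoding of the actions en_i, ln_i, ec_i and lc_i:

    CYCLE = ["ln", "ec", "lc", "en"]     # cyclic order of process i's actions

    def satisfies_me1(trace, i):
        """ME1: process i's visible actions occur in the order ln, ec, lc, en."""
        proj = [a[:2] for a in trace if a in {f"{c}{i}" for c in CYCLE}]
        return all(a == CYCLE[k % 4] for k, a in enumerate(proj))

    def satisfies_me2(trace):
        """ME2: at most one process is between its ec and its lc."""
        in_cs = set()
        for act in trace:
            kind, i = act[:2], act[2:]
            if kind == "ec":
                if in_cs:
                    return False    # another process is already in the critical section
                in_cs.add(i)
            elif kind == "lc":
                in_cs.discard(i)
        return True

    assert satisfies_me1(["ln1", "ln2", "ec1", "lc1", "en1"], 1)
    assert not satisfies_me2(["ln1", "ln2", "ec1", "ec2"])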
The requirement speed independence is automatically satisfied for models of mutual exclusion protocols rendered in any of the formalisms discussed so far, as these formalisms lack the expressiveness to make anything dependent on speed.
The following examples show that none of the above requirements are redundant.
• The CCS process F_1 | F_2 | ··· | F_N with F_i def= ln_i.ec_i.lc_i.en_i.F_i satisfies all requirements, with CC := J, except for ME2.
• The process R_1 | R_2 | ··· | R_N with R_i def= ln_i.0 satisfies all requirements except for ME3.

State-oriented Requirements for Mutual Exclusion
Some readers may prefer a state-oriented view of mutual exclusion over the action-oriented view of Section 12. In such a view, the mutex requirement (ME2 in Section 12) simply says that different processes i and j cannot be in the critical section at the same time, rather than encoding this in terms of the actions of entering and leaving the critical section. This section translates the requirements on mutual exclusion from the action-oriented view of Section 12 to a state-oriented view. Let us model the protocol with a Kripke structure that features the atomic predicates C_i and I_i, for i = 1, 2. Predicate C_i holds when Process i is in its critical section, and I_i when Process i intends to enter its critical section, but isn't there yet. Predicate I_i should thus hold when Process i has left the noncritical section and is executing its entry protocol. In this presentation one can lump together the exit protocol and the noncritical section of process i; these comprise the states satisfying ¬(I_i ∨ C_i). Now Requirements ME1-6 can be reformulated in terms of these predicates, for i, j = 1, ..., N; the mutex requirement ME2, for instance, becomes P ⊨ G ¬(C_i ∧ C_j) for i ≠ j, and starvation-freedom ME3 becomes P ⊨^CC_B G(I_i ⇒ F C_i).
Here B can be rendered as a set of atomic predicates p; it refers to all those "blocking" transitions that go from a state where p does not hold to one where p holds, for some p ∈ B. So here B = {I_i | i = 1, ..., N}. In this setting, the gatekeeper is depicted on the right. It satisfies ME1-4 above with CC = Pr, but ME6 only with CC = WF.

A Hierarchy of Quality Criteria for Mutual Exclusion Protocols
Formalising the quality criteria for fair schedulers as FS1-4, one sees that, unlike FS3 and FS4, Requirements FS1 and FS2 are parametrised by the choice of a completeness criterion CC. In each of FS1 and FS2, CC can be instantiated with either ⊤ (the trivial completeness criterion, deeming all paths complete), Pr, J, WF(T) or SF(T), for a suitable collection of tasks T. When seeing WF(T) and SF(T) as single choices, allowing them to utilise the most appropriate choice of T, this yields a hierarchy of 5 × 5 = 25 different quality criteria for fair schedulers, partially depicted in Figure 3. Here "Request" indicates the completeness criterion used in Requirement FS1 and "Granting" the one taken in FS2. Note that quality criteria encountered further up in the figure employ stronger fairness assumptions, and thus yield weaker, or less impressive, fair schedulers.
I have not rendered all 25 quality criteria in Figure 3, as many are irrelevant. Since no meaningful liveness property holds when merely assuming the trivial completeness criterion ⊤, one can safely discard it from consideration; there can exist no fair scheduler satisfying FS1 or FS2 with CC = ⊤. Likewise, one can forget about the possibility "Request: Pr". As an infinite run such as (r_2 t_2 e)^∞, in which no request r_1 is ever received, and consequently no request-granting action t_1 ever occurs, should be a complete run of the system, progress is not strong enough an assumption to ensure that when user 1 wants to issue request r_1 it will actually succeed. The least one should assume here is justness.
At the other end of the hierarchy I have dropped the choice SF. The reason is that there turn out to be completely satisfactory solutions merely assuming weak fairness in either dimension; the choice of strong fairness would thus make the fair scheduler unnecessarily weak.
The same hierarchy of quality criteria applies to mutual exclusion protocols. Now "Request" indicates the completeness criterion used in Requirement ME6 and "Granting" the one taken in ME3. The latter concerns the starvation-freedom property of mutual exclusion; it indicates how hard it is to reach the critical section after a process' interest in doing this has been expressed. The former indicates how hard it is to express such an interest in the first place. Again, the choice "Request: Pr" can be discarded, as no mutual exclusion protocol can meet this requirement. This is due to the infinite run (ln_2 ec_2 lc_2 en_2)^∞, in which process 1 never requests access to the critical section. When merely assuming progress, one cannot tell whether this is because process 1 does not want to leave its noncritical section, or because it wants to but doesn't succeed, as always another action is chosen.
In principle, there are two more dimensions in classifying the quality criteria for mutual exclusion protocols, namely the choice of a completeness criterion for Requirements ME4 and 5. These indicate how hard it is to leave the critical section after entering, and to enter the noncritical section after leaving the critical one, respectively. As both tasks are really easy to accomplish, these two dimensions are not indicated in Figure 3. Nevertheless, they should not be forgotten in the forthcoming analysis. There are no further dimensions for ME1 and 2, as these are safety properties.
When allowing weak fairness in the "request" dimension, the gatekeeper, described for fair schedulers in Section 11 and for mutual exclusion in Section 12, is a good solution. It merely requires progress in the "granting" dimension, and for mutual exclusion also in ME4 and ME5. As most researchers in the area of mutual exclusion would agree that the gatekeeper is nevertheless not an acceptable protocol, we have evidence that weak fairness in the "request" dimension is too strong an assumption.

An Input Interface for Implementing ME6
In [GH15b, Section 13] an input interface is proposed that can be put around any potential fair scheduler expressed in CCS; it was recalled in Section 11. It was shown that a process F satisfies Requirements FS1-4 iff the encapsulated process ⌈F⌉, the result of putting F in this interface, satisfies FS2-4. Here I propose a similar interface for mutual exclusion protocols.
Definition 15.1. For any expression P, let ⌈P⌉ := (P[f] | I_1 | I_2) ∖ {c_1, c_2}, where f is a relabelling with f(ln_i) = c_i, acting as the identity on all other actions, and I_i def= ln_i.c̄_i.I_i for i ∈ {1, 2}.

Observation 15.2. Suppose that P satisfies ME1, and criterion CC is at least as strong as justness. Then ⌈P⌉ satisfies ME1 as well as ME6_J, and
• ⌈P⌉ satisfies ME2 iff P satisfies ME2,
• ⌈P⌉ satisfies ME3_CC iff P satisfies ME6_CC and ME3_CC.

Let the encapsulated gatekeeper H be the result of putting this input interface around the gatekeeper for mutual exclusion. Its labelled transition system is depicted in Figure 4, where I added subscripts to the τ-actions to indicate their origins.

Figure 4: Encapsulated gatekeeper
This mutual exclusion protocol satisfies ME6_J, for as soon as en_i has occurred (and in the initial state) Process I_i is in its initial state ln_i.c̄_i.I_i, and nothing stands in the way of the action ln_i. In other words, justness is a strong enough assumption for ln_i to occur. Clearly, the protocol also satisfies ME1 and ME2, as well as ME4_Pr and ME5_Pr. The only downside is that it takes weak fairness to achieve ME3, starvation-freedom. This assumption is needed to ensure that the synchronisation between the actions c̄_i and c_i will actually occur. Intuitively, the encapsulated gatekeeper is just as unacceptable a mutual exclusion protocol as the gatekeeper, for the input interface ought to make no difference. This shows that weak fairness in any dimension of Figure 3 is too strong an assumption. However, due to the impossibility result of [GH15b], the two remaining entries of Figure 3 cannot be realised in CCS. Theoretically, that result leaves open the possibility of achieving justness in both the dimensions "request" and "granting", at the expense of assuming weak fairness for ME4 or ME5. I do not think this is actually possible, and even if it were, a solution that requires weak fairness to escape the critical section, or to enter the noncritical one, appears just as unacceptable as the (encapsulated) gatekeeper.

Part III. Impossibility Results for Peterson's Mutual Exclusion Algorithm
Here I recall three impossibility results for mutual exclusion protocols that have been shown or claimed earlier, and illustrate or substantiate them for Peterson's mutual exclusion protocol. I could have equally well done this for another mutual exclusion protocol, such as Lamport's bakery algorithm [Lam74], Szymański's mutual exclusion algorithm [Szy88] or the round-robin scheduler. 10 My reason for choosing Peterson's protocol in this paper is that it is (one of) the simplest of all mutual exclusion protocols.
The first impossibility result stems from [Vog02,KW97,GH15b], and says that in Petri nets, and in CCS and similar process algebras, it is not possible to model a mutual exclusion protocol in such a way that it is correct without making an assumption as strong as weak fairness. In [Vog02] this is shown for finite Petri nets, and in [KW97] for a class of Petri nets that interact with their environment through an interface of a particular shape, similar to the one of Section 15. In [GH15b] the same is shown for all structural conflict nets (and thus for all safe nets), as well as for the process algebra CCS, with strong hints on how the result extends to many similar process algebras. For the latter result, either the concurrency relation between CCS transitions defined in Section 4.4, or directly the resulting concept of a just path, needs to be seen as an integral part of CCS. In Sections 18 and 19 I will illustrate this impossibility result for a rendering of Peterson's protocol as a Petri net and as a CCS expression, respectively. In [GH15b,GH19] moreover the point of view is defended that assuming (strong or weak) fairness is typically unwarranted, in the sense that there is no reason to assume that reality will behave in a fair way. From this point of view, a model of mutual exclusion that hinges on a fairness assumption can be seen as incorrect or unsatisfactory. This makes the above into a real impossibility result.
The second impossibility result was claimed in [Gla18], but unaccompanied by written evidence. It blames the first impossibility result above on the combination of two assumptions or protocol features, which I here call atomicity and speed independence. Atomicity, or rather the special case of atomicity that is relevant for the second impossibility result, will be formally defined as (1) in Section 20. It can be seen as an assumption on the behaviour of the hardware on which an implementation of a mutual exclusion protocol will be running. Atomicity is explicitly assumed in the original paper of Dijkstra where the mutual exclusion problem was presented [Dij65], and implicitly in many other papers on mutual exclusion, but not in the work of Lamport [Lam74]. Speed independence can either be seen as an assumption on the underlying hardware, or as a feature of a mutual exclusion protocol. The assumption stems from Dijkstra [Dij65] and was quoted in Section 9. The claims of [Gla18] employ a rather strict interpretation of speed independence, illustrated in Example 22.1.

10 As a mutual exclusion protocol, the round-robin scheduler is a central scheduler that grants access to the critical section to N processes numbered 1 to N by cycling through all competing processes in the order 1 to N. Each time it is the turn of Process i, the round-robin scheduler checks whether Process i wants to enter the critical section, and if so, grants access. When Process i leaves the critical section, or if it didn't want to enter, it will be the turn of Process i+1 mod N. When confronted with the claim that under some natural assumptions no correct mutual exclusion protocols exist, some people reply that in that case one could always use a round-robin scheduler, as if this somehow constitutes an exception.

Figure 5: Peterson's protocol, pseudocode for Process A. (Process B is symmetric, with instructions m_1-m_8, the roles of readyA and readyB swapped, turn := A in m_3, and turn = B in m_4.)
  ℓ_1: leave noncritical section
  ℓ_2: readyA := true
  ℓ_3: turn := B
  ℓ_4: await (readyB = false ∨ turn = A)
  ℓ_5: enter critical section
  ℓ_6: leave critical section
  ℓ_7: readyA := false
  ℓ_8: enter noncritical section
In a setting where solutions based on the assumption of weak fairness are rejected, as well as solutions that are merely probabilistically correct, [Gla18] claims that when assuming atomicity, speed-independent mutual exclusion is impossible. This means that when assuming atomicity and speed independence, there is no mutual exclusion protocol satisfying ME1-6 with CC = J. The assumption of speed independence is built into CCS and Petri nets, in the sense that any correct mutual exclusion protocol formalised therein is automatically speed independent. This is because these models lack the expressiveness to make anything dependent on speed. When taking the concurrency relation between CCS or net transitions defined in Section 4.4 as an integral part of the semantics of CCS or Petri nets, the assumption of atomicity is also built into these frameworks. This makes the first impossibility result a special case of the second. The latter can be seen as a generalisation of the former that is not dependent on a particular modelling framework. In Section 22 I will substantiate the above claim of [Gla18] for the special case of Peterson's protocol.
The third impossibility result, also claimed in [Gla18], says that when dropping the assumption of atomicity, but keeping speed independence, there still exists no mutual exclusion protocol satisfying ME3_Pr. That is, the assumption of progress is not strong enough to obtain starvation-freedom of any speed-independent mutual exclusion protocol. I will substantiate this claim for the special case of Peterson's protocol in Section 16. In Part IV I aim for a formalisation of Peterson's protocol that satisfies ME3_Pr. It follows that there I will have to drop speed independence.

Peterson's Mutual Exclusion Protocol
A pseudocode rendering of Peterson's protocol is depicted in Figure 5. The two processes, here called A and B, use three shared variables: readyA, readyB and turn. The Boolean variable readyA can be written by Process A and read by Process B, whereas readyB can be written by B and read by A. By setting readyA to true, Process A signals to Process B that it wants to enter the critical section. The variable turn can be written and read by both processes. Its carefully designed functionality guarantees mutual exclusion as well as deadlock-freedom. Both readyA and readyB are initialised with false, and turn with A. In the LTS of the protocol, depicted in Figure 6, the instructions ℓ_1, ℓ_5, ℓ_6 and ℓ_8 (and likewise m_1, m_5, m_6 and m_8) give rise to the visible actions ln, ec, lc and en; all transitions stemming from the remaining instructions, which access the shared variables, are labelled τ. When assuming speed independence, none of the paths in Figure 6 can be ruled out due to timing considerations. This makes Figure 6 an adequate rendering of Peterson's protocol.
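For readers who prefer an executable form, here is a direct transcription of Figure 5 into Python threads. It is a sketch only: it relies on the CPython interpreter executing the individual loads and stores atomically and in program order, which its global interpreter lock happens to guarantee; on hardware with a weak memory model the same code would need memory fences.

    import threading

    ready = {"A": False, "B": False}     # the variables readyA and readyB
    turn = {"val": "A"}
    count = {"val": 0}                   # touched only inside the critical section

    def process(me, other, rounds):
        for _ in range(rounds):
            ready[me] = True                              # l2
            turn["val"] = other                           # l3
            while ready[other] and turn["val"] == other:  # l4: await
                pass
            count["val"] += 1            # critical section (l5, l6)
            ready[me] = False                             # l7

    a = threading.Thread(target=process, args=("A", "B", 5000))
    b = threading.Thread(target=process, args=("B", "A", 5000))
    a.start(); b.start(); a.join(); b.join()
    print(count["val"])                  # 10000: no increment was lost

The unprotected update of count would lose increments if both processes could be in the critical section simultaneously; the final value 10000 is thus a (weak, experimental) witness of the mutex property ME2.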
From the pseudocode in Figure 5 one sees immediately that Peterson's protocol satisfies ME1 (all visible actions occur in the right order) and ME6_J (nothing stands in the way of a process leaving its noncritical section). By inspecting the LTS in Figure 6 one sees that it moreover satisfies ME2 (the mutual exclusion property) and ME4_Pr (assuming progress, and willingness to do so, is enough to ensure that a process will leave its critical section after entering it). One obtains ME5_J (assuming justness suffices to ensure that a process always enters its noncritical section after leaving its critical section) by combining the code and the LTS. The LTS shows that assuming progress suffices to ensure that lc_B is always followed by m_7. The code shows that assuming justness suffices to ensure that m_7 is always followed by en_B. Of course the same applies to Process A.
More problematic is Requirement ME3, starvation-freedom. Thanks to symmetry, I may restrict attention to ME3 for Process A. Will ℓ_1 = ln_A always be followed by ℓ_5 = ec_A? The LTS shows that ℓ_2 is always followed by ℓ_5 = ec_A, even when merely assuming progress. However, it is less clear whether ℓ_1 = ln_A is always followed by ℓ_2. The only 11 progressing path π_P on which ℓ_1 is not followed by ℓ_2 visits the state ℓ_2 m_4 infinitely often and always takes the transition going right. This path witnesses that progress is not strong enough an assumption to ensure ME3, whereas weak fairness is, provided all the transitions stemming from Instruction ℓ_2 form a task. 12 Whether justness is a strong enough assumption for ME3 depends solely on the question whether π_P is just.

11 Here I consider two paths essentially the same if they differ merely on a finite prefix.
12 Formally, Peterson ⊨^{WF(T)}_B G(ln_i ⇒ F ec_i) when the tasks ℓ_2, m_2 ∈ T, where ℓ_2 (resp. m_2) contains all transitions stemming from Instruction ℓ_2 (resp. m_2). Namely, π_P fails to be weakly fair, for it has a suffix on which task ℓ_2 is perpetually enabled but never occurs.
To answer that question one can interpret the LTS of Figure 6 as an LTSC, by investigating an appropriate concurrency relation ⌣• on the transitions. Whether two transitions are concurrent ought to depend solely on the instructions ℓ_i and m_j from Figure 5 that gave rise to these transitions, that is, on whether ℓ_i ⌣• m_j for i, j = 1, ..., 8. Assuming that the three variables readyA, readyB and turn are stored in independent stores or registers, the only pairs that may violate ℓ_i ⌣• m_j are ℓ_3 ⌣• m_3, ℓ_2 ⌣• m_4, ℓ_3 ⌣• m_4, ℓ_7 ⌣• m_4, ℓ_4 ⌣• m_2, ℓ_4 ⌣• m_3 and ℓ_4 ⌣• m_4, for these instructions compete for access to the same register. 13 The only pair out of these 7 that affects the justness of π_P is ℓ_2 ⌣• m_4. Considering that in state ℓ_2 m_4 it is possible to perform Instruction m_4, moving to state ℓ_2 m_5, yet in state ℓ_3 m_4, thus after performing ℓ_2, it is no longer possible to perform Instruction m_4, one surely has ¬(m_4 ⌣• ℓ_2), that is, Instruction m_4 is affected by ℓ_2; compare (4.2). On the other hand, one cannot derive from Figure 6 alone whether ℓ_2 ⌣• m_4. In case one decides that ℓ_2 ⌣• m_4, then π_P is not B-just by Definition 6.1; as nothing blocks the execution of Instruction ℓ_2, it must eventually occur. In this case ME3_J holds. However, in case one decides that ¬(ℓ_2 ⌣• m_4), then π_P is just and ME3_J does not hold.

13 The verdicts ℓ_1 ⌣• m_j, ℓ_8 ⌣• m_j, ℓ_i ⌣• m_1 and ℓ_i ⌣• m_8 were already used implicitly in the above derivation of ME6_J and ME5_J from the pseudocode.
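The path π_P can be made concrete by a small simulation in which an adversarial scheduler never picks Process A's enabled write ℓ_2. Every step fires some enabled transition, so progress is satisfied; still, A starves. The encoding of B's cycle below is mine.

    # A finite prefix of the path pi_P: Process A stays at l2 while an
    # adversarial scheduler lets Process B cycle.  Some enabled transition
    # fires at every step, so progress holds; yet A never writes readyA.
    readyA, readyB, turn = False, False, "A"
    pcB, b_entries = 2, 0            # B has just left its noncritical section

    for step in range(40):
        # A's write l2 (readyA := True) is enabled throughout, but never chosen.
        if pcB == 2:   readyB = True;  pcB = 3    # m2
        elif pcB == 3: turn = "A";     pcB = 4    # m3
        elif pcB == 4 and (not readyA or turn == "B"):
            pcB = 5                               # m4 passes: readyA stays false
        elif pcB == 5: b_entries += 1; pcB = 6    # m5: critical section
        elif pcB == 6: pcB = 7                    # m6
        elif pcB == 7: readyB = False; pcB = 8    # m7
        elif pcB == 8: pcB = 1                    # m8: noncritical section
        else:          pcB = 2                    # m1

    print(f"B entered its critical section {b_entries} times; A still waits at l2")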
Sections 18 and 19 examine whether ℓ_2 ⌣• m_4 holds in renderings of this protocol as a Petri net and in CCS. In Section 20 I will reflect on whether ℓ_2 ⌣• m_4 holds, and thus on whether Peterson's protocol satisfies ME3_J, based on a classification of what could happen if two processes try to access the same register at the same time, one writing and one reading.

Verifications of Starvation-Freedom Merely Assuming Progress
Above, I argued that in any formalisation P of Peterson's algorithm that is consistent with the LTS of Figure 6, including any speed-independent formalisation, one has P ⊭^Pr_B G(ln_i ⇒ F ec_i); that is, Requirement ME3_Pr does not hold, or, the formalisation P does not satisfy starvation-freedom when merely assuming progress. The path π_P from Section 16 constitutes a counterexample. In Part IV of this paper I will bypass this verdict by proposing a formalisation of Peterson's algorithm that is not consistent with the LTS of Figure 6.
In [Wal89], Peterson's algorithm was formalised in a way that is consistent with Figure 6, yet starvation-freedom was proven, even by automatic means, while assuming no more than progress. The contradiction with the above is only apparent, because the starvation-freedom property obtained by [Wal89] can be stated as P ⊨^Pr_B G(ℓ_2 ⇒ F ec_A), and symmetrically P ⊨^Pr_B G(m_2 ⇒ F ec_B); that is, once a process expresses its intention to enter its critical section, by executing Instruction ℓ_2 (or m_2), it will surely reach its critical section. 14 The same can be said for the verification of starvation-freedom of Peterson's algorithm in [VS96], and, in essence, for the verification of starvation-freedom of Dekker's algorithm in [EB96].
It can be (and has been) debated which is the better formalisation of starvation-freedom. Since the greatest hurdle in the protocol is Instruction ℓ_2, it seems unfair to deem a process interested in entering the critical section only once this hurdle is taken. 15 Another tactic is to consider a form of Peterson's protocol in which Processes A and B are merely interfaces that interact with the real clients competing for access to the critical section by synchronising on the actions ln_A, ln_B, ec_A, ec_B, lc_A, lc_B, en_A and en_B. In such a case, the real intent of Client A to enter its critical section is expressed in a message to Interface A that occurs strictly before Interface A executes Instruction ℓ_2.
Such arguments are less necessary now that I have formalised ME6 as an additional requirement. Suppose one were to redefine the action ln_A that appears in the antecedent of the starvation-freedom requirement ME3 as the occurrence of Instruction ℓ_2 from Peterson's algorithm, thus turning ME3 for Process A into P ⊨^CC_B G(ℓ_2 ⇒ F ec_A); then ME6 becomes P ⊨^CC_{B∖{ℓ_2}} (F ℓ_2 ∧ G(en_A ⇒ F ℓ_2)). Either way, it is required that ℓ_1 will be followed by ℓ_2; this requirement is either part of ME6 or of ME3. In case it takes weak fairness to perform Instruction ℓ_2, either ME3 or ME6 will hold only for CC = WF, and depending on one's modelling preferences one can choose which one.
In Sections 18 and 19 I will show that in formalisations of Peterson's algorithm as a Petri net or a CCS expression, the path π_P from Section 16 is just, implying that it takes weak fairness to perform Instruction ℓ_2. This implies that, in terms of Figure 3, those formalisations are situated either at the coordinates (WF, Pr) or (J, WF), depending on one's preferred modelling of the intent to enter the critical section. Either way, such a rendering of Peterson scores no better than the (encapsulated) gatekeeper.
14 In the literature [SGG12,Ray13] it is frequently claimed that once Process A expresses its intention to enter its critical section, by executing Instruction ℓ_2, Process B will enter the critical section at most once before A does (and the same with A and B reversed, of course). A counterexample is provided by the path that turns right as much as possible from the state ℓ_3 m_5. Here Process A has just executed ℓ_2, but Process B enters the critical section twice before A does. (By the definitions in [Ray13], in state ℓ_3 m_5 the two processes are competing, and A loses the competition twice.)

15 Suppose I promise all of you $1000, if only you express interest in getting it, by filling in Form 316F. Then I implement this promise by making it impossible to fill in Form 316F. In that case you might argue against the claim that the requirement G("has interest" ⇒ F "receive $1000") has been verified. The argument would be that filling in Form 316F is an inadequate formalisation of having interest in getting this prize.

Modelling Peterson's Protocol as a Petri Net

Figure 7 shows a rendering of Peterson's protocol as a Petri net. There is one place for each of the local states ℓ_1-ℓ_8 and m_1-m_8 of Processes A and B, and two for each of the Boolean variables readyA, readyB and turn. There is one transition for each of the instructions ℓ_1-ℓ_8 and m_1-m_8, except that ℓ_3, m_3, ℓ_4 and m_4 yield two transitions each. For ℓ_3 and m_3 this is to deal with each of the possible values of turn before the assignment is executed; for ℓ_4 the two transitions are for reading that turn = A, and for reading that readyB = false. The transitions ℓ_2 and m_4^A are not concurrent, because they compete for the same token on the place readyA = false. For this reason the run π_P, in which one token is stuck in the place ℓ_2 while the other four tokens keep moving around, with m_4^A executed infinitely many times, is just. Consequently, this Petri net does not satisfy requirement ME3_J.

Modelling Peterson's Protocol in CCS
In order to model Peterson's mutual exclusion protocol in CCS, I use the names en_A, en_B, ln_A, ln_B, ec_A, ec_B, lc_A and lc_B for Processes A and B entering and leaving their (non)critical section. Following [DGH17], I describe a simple shared memory system in CCS, using the name asgn_x^v for the assignment of value v to the variable x, and n_x^v for noticing or notifying that the variable x has the value v. The action asgn_x^v communicates the assignment x := v to the shared memory, whereas the complementary action asgn̄_x^v is the action of the shared memory of accepting this communication. Likewise, n̄_x^v is a notification by the shared memory that x equals v; it synchronises with the complementary action n_x^v of noticing that x = v.

The Processes A and B can be modelled as

  A def= ln_A . asgn_readyA^true . asgn_turn^B . (n_readyB^false + n_turn^A) . ec_A . lc_A . asgn_readyA^false . en_A . A,
  B def= ln_B . asgn_readyB^true . asgn_turn^A . (n_readyA^false + n_turn^B) . ec_B . lc_B . asgn_readyB^false . en_B . B,

where (a + b).P is a shorthand for a.P + b.P. This CCS rendering naturally captures the await statement, requiring Process A to wait at Instruction ℓ_4 until it can read that readyB = false or turn = A. We use two agent identifiers for each Boolean variable x, one for each value:

  x_true  def= asgn̄_x^true . x_true + asgn̄_x^false . x_false + n̄_x^true . x_true,
  x_false def= asgn̄_x^true . x_true + asgn̄_x^false . x_false + n̄_x^false . x_false.

Likewise we have, for instance, turn_A def= asgn̄_turn^A . turn_A + asgn̄_turn^B . turn_B + n̄_turn^A . turn_A, and symmetrically turn_B. Peterson's mutual exclusion algorithm (PME) is the parallel composition of all these processes, restricting all the communications:

  PME def= (A | B | readyA_false | readyB_false | turn_A) ∖ L,

where L is the set of all names except en_A, en_B, ln_A, ln_B, ec_A, ec_B, lc_A and lc_B [DGH17].
The LTS of the above CCS expression PME is exactly as displayed in Figure 6. By the interpretation of CCS as an LTSC, defined in Section 4.4, one obtains ¬(ℓ_2 ⌣• m_4), where I use the names ℓ_2 and m_4 of the underlying instructions from Figure 5 to denote the two outgoing τ-transitions of the state ℓ_2 m_4. In fact, this could have been concluded without studying the above CCS rendering of Peterson's protocol, as Section 4.4 remarks that in the LTSC of CCS the relation ⌣• is symmetric, while Section 16 concludes that ¬(m_4 ⌣• ℓ_2). Thus, the CCS rendering of Peterson's algorithm does not satisfy the correctness criterion ME3_J.

What Happens if Processes Try to Read and Write Simultaneously
A program instruction like ℓ_2, ℓ_3, ℓ_4 or ℓ_7 that reads or writes a value true, false, A or B from or to a register readyA, readyB or turn cannot be executed instantaneously, and is thus assumed to occur during an interval of real time. Hence it may happen that Processes A and B try to access the same register during overlapping periods of time. In such a case it is common to assume that the register is safe, meaning that a read operation not concurrent with any write operation returns the value written by the latest write operation, provided the last two write operations did not overlap.
This assumption stems from [Lam86], although overlapping writes were not considered there. "No assumption is made about the value obtained by a read that overlaps a write, except that it must obtain one of the possible values of the register." [Lam86] 16 In the same spirit, one may assume that two overlapping writes may put any of the register's possible values in the register, in the sense that subsequent reads will return that value. I will assume safety in this sense of the Boolean registers readyA, readyB and turn. In an architecture where safe registers are not available, and cannot be simulated, implementing a correct mutual exclusion protocol appears to be hopeless. 17 For the fate of Peterson's algorithm it matters what happens if one process wants to start writing a register when another is busy reading it. There appear to be only three (or five) possibilities.
(1) The register cannot handle a read and a write at the same time; as the read started first, the writing process will need to await the termination of the read action before the write can commence.
(2) The register cannot handle a read and a write at the same time, but the write takes precedence and occurs when scheduled. This aborts the read action, which can restart after the write has terminated.
(3) The read and write proceed as scheduled, thus overlapping in time.
A fourth possibility could be that reads and writes are instantaneous after all, so that overlap can be avoided without postponing either. I deem this unrealistic and do not consider this option here. A potential fifth possibility could be a variation of (2), in which the read is merely interrupted, and resumes after the write is finished. In that case, as with option (3), it seems reasonable to assume that the read can return any value of the register.
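A safe register in the sense above, under possibility (3), can be sketched as follows. The interval-based interface and the use of random choice for overlapping reads are modelling assumptions of mine, not part of [Lam86]; for simplicity the sketch ignores overlapping writes.

    import random

    class SafeRegister:
        """Lamport-safe register under possibility (3): reads and writes
        proceed concurrently, and a read that overlaps a write may return
        any value of the register.  Operations carry explicit (start, end)
        time stamps instead of real clocks."""

        def __init__(self, values, initial):
            self.values = values
            self.writes = [(float("-inf"), float("-inf"), initial)]

        def write(self, start, end, value):
            self.writes.append((start, end, value))

        def read(self, start, end):
            overlapping = [w for w in self.writes if w[0] < end and start < w[1]]
            if overlapping:
                return random.choice(self.values)   # any value may be obtained
            # otherwise: the value of the latest preceding write
            return max(w for w in self.writes if w[1] <= start)[2]

    r = SafeRegister(values=[False, True], initial=False)
    r.write(1.0, 2.0, True)
    print(r.read(3.0, 4.0))   # True: no overlap, the latest write is returned
    print(r.read(1.5, 2.5))   # False or True: this read overlaps the write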
In Dijkstra's original formulation of the mutual exclusion problem [Dij65], possibility (1) above, atomicity, was assumed; see the quote in Section 9. Lamport, on the other hand, assumes (3) [Lam74]. On his webpage https://lamport.azurewebsites.net/pubs/pubs.html#bakery Lamport takes the position that assuming atomicity "cannot really be said to solve the mutual exclusion problem", as it assumes "lower-level mutual exclusion". As possibility (3) adds the complication of arbitrary register values returned by reads that overlap a write, he implicitly takes the position that solving the mutual exclusion problem under assumption (3) is more challenging; and this is exactly what his bakery algorithm does.
Here I argue that atomicity is the more challenging assumption. The objection that assuming reads and writes to be atomic amounts to assuming "lower-level mutual exclusion" is based on the idea that securing the mutex property of a mutual exclusion protocol is the main challenge. However, the real challenge is doing this in a starvation-free way, and this feature is not inherited from the lower level. By assuming atomicity one obtains ¬(ℓ_2 ⌣• m_4), that is, transition ℓ_2 is affected by m_4, and consequently Peterson's algorithm fails the correctness requirement ME3_J. In Section 22 below I will argue that this is not merely a result of the way I chose to model things in this paper, but actual evidence of the incorrectness of Peterson's algorithm, or any other mutual exclusion protocol for that matter, provided one consistently assumes atomicity, as well as speed independence.
Assuming (2) instead yields ℓ_2 ⌣• m_4, that is, the write ℓ_2 is in no way affected by the read m_4. This means that nothing can prevent Process A from executing ℓ_2. This makes Peterson's algorithm correct, in the sense that it satisfies ME3_J.
Assuming (3) also yields ℓ_2 ⌣• m_4. Also this would make the algorithm correct, provided that it is robust against the effects of overlapping reads and writes. For Peterson's algorithm this is not the case: overlapping reads and writes can cause a violation of the mutex property ME2; see Section 21. However, various other mutual exclusion protocols, including the ones of [Lam74] and [Szy88], are robust against the effects of overlapping reads and writes, and do satisfy ME3_J when assuming (3).
In CCS, and in Petri nets, ⌣• is symmetric, and one has ¬(ℓ ⌣• m) whenever ℓ and m are read or write instructions on the same register, at least when employing the concurrency relation of Section 4.4. This amounts to assuming atomicity.

Is Peterson's Protocol Resistant Against Overlapping Reads and Writes?
Those mutual exclusion protocols that were designed to be robust under overlapping reads and writes avoid overlapping writes altogether, either by making sure that each variable can be written by only one process (although it can be read by others) [Lam74,Szy88], or by putting writes to the same variable right before [Dij63] 18 or after [Ara11] the critical section, within the part of a process' cycle that is made mutually exclusive. Any protocol that doesn't take this precaution, including Peterson's, is regarded with suspicion by those that make an effort to avoid ill effects due to reads overlapping with writes. Nevertheless, until May 2021 no examples were known (to me at least) where overlapping reads and writes actually cause any problem for Peterson's protocol.

I personally believed that it was robust under overlap, reasoning as follows [unpublished notes]. Two overlapping write actions to the same register may produce any value of that register. In Peterson's algorithm, the only register that can be written by both processes contains the variable turn. It is a Boolean register, whose values are A and B. The only write actions to this register are ℓ_3 and m_3. When these overlap in time, any of the register values, that is A or B, may result. However, ℓ_3 tries to write the value B, and m_3 the value A. So if the result of a simultaneous write is A, one can just as well assume that ℓ_3 occurred before m_3, and if it is B, that m_3 occurred before ℓ_3. Thus the effects of overlapping writes are no different from those of atomic writes, and hence harmless.

Peterson's algorithm has six cases of a read overlapping with a write, and thanks to symmetry it suffices to study three of them. First consider the overlap of the write ℓ_2 with the read m_4. Here the overlapping read can yield any register value, that is, true or false. One should not ignore the possibility that Process B, while cycling around, performs multiple m_4-reads of readyA that may return any sequence of true and false during a single write action ℓ_2 of Process A. However, any read by Process B that returns readyA = true does not help to pass the await-statement in m_4, and is equivalent to no read of readyA being carried out. So all reads that matter return readyA = false, and can just as well be thought of as occurring prior to ℓ_2. A similar argument applies to the overlap of the write ℓ_3 with the read m_4, and of the write ℓ_7 with the read m_4; also here any resulting behaviour can already be generated without assuming an overlap.

I leave it as a puzzle to the reader to find the fallacy in this argument. A run of Peterson's algorithm that violates the mutex property ME2 was found in 2021 by means of the model checker mCRL2 by Myrthe Spronck [Spr21]. This involved implementing safe registers, as described in Section 20, as mCRL2 processes.

The Impossibility of Mutual Exclusion when Assuming Atomicity and Speed Independence
In this section I argue that when assuming atomicity and a strict form of speed independence, Peterson's algorithm is not correct, in the sense that it fails the requirement of starvation-freedom. The argument is that the run corresponding to the path π_P from Section 16 can actually occur. To see why, let me illustrate the form of speed independence needed for this argument by means of a simple example in CCS.
Example 22.1. Consider the process (X|Y) ∖ {c} with X def= a.0 + c̄.X and Y def= c.d.e.Y, where none of the actions a, d, e is blocked by the environment, that is, the environment is continuously eager to partake in these actions. The question is whether the action a is guaranteed to occur.
Here one may argue that when Process X reaches the state a.0 + c̄.X at a time when its environment, which is Process Y, is not yet ready to engage in the synchronisation on c, it will proceed by executing a. If, on the other hand, both options a and c̄ are available, it cannot be excluded that c̄ is chosen, as no priority mechanism is at work here. Now after execution of c̄|c, Process X again faces a choice between a and c̄, but Process Y first has to execute the actions d and e. During the time that Y is busy with d and e, it feels to Process X as if c̄ is blocked, and it will do a.
A strict interpretation of speed independence, which I employ here, says that the actions d and e may be executed so fast that from the perspective of Process X one can just as well assume that no actions d and e were scheduled at all. Thus the answer to the above question will not change upon replacing Process Y by Y′ def= c.Y′. However, for the process (X|Y′) ∖ {c} there is really no reason to assume that a will ever occur.
In CCS and related process algebras this form of speed independence is built in: one sees d and e as τ-transitions, as for the purpose of answering the above question they can be regarded as unobservable, and then applies the law a.τ.P = a.P [Mil90].
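The race in Example 22.1 can be mimicked by a few lines of Python in which the choice in X is resolved adversarially, c winning whenever it is available; the duration parameter is my stand-in for the speed of d and e.

    def run(steps, de_duration):
        """Race from Example 22.1: X repeatedly chooses between a and the
        synchronisation on c, taking c whenever Y offers it.  After each
        synchronisation Y needs `de_duration` steps for d and e; while Y
        is busy, c is unavailable to X, which then performs a."""
        busy = 0                    # steps Y still needs for d and e
        for _ in range(steps):
            if busy == 0:
                busy = de_duration  # c|c' happens; Y starts working on d and e
            else:
                return "a"          # c unavailable, so X proceeds with a
        return "a never occurred"

    print(run(100, de_duration=2))  # 'a': Y is still busy after one sync
    print(run(100, de_duration=0))  # 'a never occurred': d and e take no time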
A run of Peterson's protocol, or of any implementation thereof in a setting where atomicity and the above form of speed independence may be assumed, can visit the state ℓ_2 m_4, in which Process A wants to write to the register readyA, and that register has a choice between being written, or being read first by Process B. One cannot exclude that the read action wins this race, which allows Process B to enter the critical section. During the execution of m_4, reading readyA, Process A has to wait. Afterwards, Process B might execute the actions m_5 up to m_3 of its cycle so fast that from the perspective of Process A no time elapses at all. This brings us again to the state ℓ_2 m_4, where the same race between ℓ_2 and m_4 occurs. Again it could be won by m_4. This behaviour can continue indefinitely.
The above argument stems from [Gla18], where it was made not just for Peterson's protocol, but for all conceivable mutual exclusion algorithms. These include Lamport's bakery algorithm [Lam74], Szymański's algorithm [Szy88], and the round-robin scheduler. To obtain a correct mutual exclusion algorithm, one has to either employ register hardware for which the assumption of atomicity (possibility (1) in Section 20) is not valid, or make the protocol speed-dependent. The first option was already explored in Section 20; as mentioned there, using hardware that works according to possibilities (2) or (3) solves the problem. The second option will be explored in Part IV of this paper.

Variations of Petri nets and CCS with Non-blocking Reading
To escape from the failure of requirement ME3_J for Peterson's protocol due to assumption (1) of atomicity as well as speed independence, one can instead assume possibility (2) from Section 20, while keeping speed independence. I refer to such an option as non-blocking reading [CDV09], as a write cannot be postponed by a read action on the same register. This yields ℓ_2 ⌣• m_4, thereby saving ME3_J. Here I review how this can be modelled in variations of Petri nets and CCS.

23.1. Petri Nets with Read Arcs. A Petri net with read arcs is a Petri net extended with a multiset R of pairs (s, t) of a place s and a transition t. An element (s, t) of the multiset R is called a read arc. Read arcs are drawn as lines without arrowheads. For t ∈ T, the multiset t̂ : S → ℕ is given by t̂(s) = R(s, t) for all s ∈ S. A transition t is enabled under a marking M iff •t + t̂ ≤ M; firing it yields the marking M − •t + t•. Thus, for a transition to be enabled, there needs to be a token at the other end of each of its read arcs. However, these tokens are not consumed by the firing. Clearly, the transition relation between the markings of a net is unaffected when replacing each read arc (s, t) by a loop between s and t, that is, by dropping R and using the flow relation F_R := F + R + R^{-1}. This does not apply to the concurrency relation between transitions.
The definition of a structural conflict net can be extended to Petri nets with read arcs by requiring, for all t, u ∈ T and all reachable markings M, that if M enables t and u concurrently, i.e. M ≥ •t + t̂ + •u + û, then (•t + t̂) ∩ •u = ∅. For such nets, one defines ¬(t ⌣• u) iff (•t + t̂) ∩ •u ≠ ∅. This says that a transition u affects a transition t iff u consumes a token that is needed to enable t. The condition for a structural conflict net guarantees exactly that if ¬(t ⌣• u) and u is enabled under a reachable marking M, then t is not enabled under the marking M − •u.
As pointed out by Vogler in [Vog02], the addition of read arcs makes Petri nets sufficiently expressive to model mutual exclusion. When changing the loops between the place readyA = false and the transition m_4^A, and between readyB = false and ℓ_4^B, in Figure 7 into read arcs, one obtains ℓ_2 ⌣• m_4^A and m_2 ⌣• ℓ_4^B. So the resulting net satisfies ME1-6 with CC = J, and correctly solves the mutual exclusion problem. Two different solutions as Petri nets with read arcs are given in [Vog02, Figures 8 and 9], the one in Figure 9 being a round-robin scheduler.
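The difference between a loop and a read arc shows up directly in the definition of ⌣• above. A small Python sketch, with multisets as Counters and my own naming of the place and transitions:

    from collections import Counter

    def enabled(marking, pre, read):
        """A transition is enabled iff its preplaces and read places are marked."""
        need = pre + read
        return all(marking[p] >= n for p, n in need.items())

    def affected(pre_t, read_t, pre_u):
        """u affects t iff u consumes a token that is needed to enable t."""
        return any((pre_t + read_t)[p] > 0 for p in pre_u)

    rAf = "readyA=false"                      # the place readyA = false
    l2  = Counter({rAf: 1})                   # l2 consumes the token in rAf
    m4_loop = (Counter({rAf: 1}), Counter())  # loop: m4^A consumes and returns it
    m4_read = (Counter(), Counter({rAf: 1}))  # read arc: m4^A merely tests it

    print(affected(l2, Counter(), m4_loop[0]))   # True:  with a loop, m4^A affects l2
    print(affected(l2, Counter(), m4_read[0]))   # False: with a read arc it does not
    print(enabled(Counter({rAf: 1}), *m4_read))  # True:  the token still enables m4^A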

23.2. Broadcast Communication.
A process algebraic solution was presented in [GH15a, Section 5], using an extension ABC of CCS with broadcast communication. The most obvious distinction between broadcast communication and CCS-style handshaking communication is that the former allows multiple recipients of a message and the latter exactly one. This feature of broadcast communication was not exploited in the solution of [GH15a]. A more subtle feature of broadcast communication is that the transmission of a broadcast occurs regardless of whether anyone is listening. Thus a broadcast can be used to model a write that cannot be blocked or postponed because the receiving register is busy being read. This yields an asymmetric concurrency relation, in which a broadcast transition is not affected by a competing transition from a receiver of the broadcast, whereas the competing transition is affected by the broadcast.
Nevertheless, the semantics of ABC says that the broadcast will be received by all processes that are in a state with an outgoing receive transition. This allows one to make receipt of a broadcast reliable, by giving the receiving process an outgoing receive action in each of its states. This feature of the semantics of ABC, which is essential for modelling mutual exclusion, is somewhat debatable, as one could argue that a process that is engaged in its own broadcast transmission, through a transition between two states that each have an outgoing receive transition, is temporarily too busy to hear an incoming message.
The solution proposed in [GH15a] is not a mutual exclusion protocol, but a fair scheduler, which can be converted into a mutual exclusion protocol in the manner of Section 10. In fact, it is a variant of the encapsulated gatekeeper, in which broadcast communication makes justness suffice for all requirements. In the same spirit one can model Peterson's protocol in ABC in such a way that ME3_J is satisfied. It suffices to interpret asgn_x^v and asgn̄_x^v as broadcast transmit and receive actions, respectively. The LTSC semantics of ABC [Gla19a] then yields ℓ_2 ⌣• m_4.

23.3. Signals. A different process algebraic solution, arguably less debatable, was proposed in [CDV09,DGH17]. In [CDV09], processes P are equipped with the possibility to perform actions that do not change their state, and that, in synchronisation with another parallel process Q, describe information on the state of P that is read by Q in a non-blocking way. In [DGH17], following [Ber88], such actions are called signals. The communication between, say, a traffic light, emitting the signal red, and a car, coming to a halt, is binary, just like handshaking communication in CCS. The difference is that the concurrency relation between transitions again becomes asymmetric, because the car is affected by the traffic light, but the traffic light is not affected by the car. A car stopping for a red light in no way blocks or postpones the action of the traffic light of turning green.
In [Ber88,DGH17] the emission of a signal is modelled as a predicate on states, whereas the receipt of such an emission is modelled as a transition. In [CDV09,Bou18] the emission of a signal is modelled as a transition instead. An advantage of the former approach is that it stresses the semantic difference between signal emissions and handshaking actions, and emphasises that signal emissions cannot possibly cause a state-change. An advantage of the latter approach is that communication via signals can be treated in the same way as CCS handshaking communication, thereby simplifying the process algebra. Technically, the two approaches are equivalent. In CCS with signals [DGH17], modelling signal emissions as transitions [Bou18,Gla19a], a Boolean variable x, such as readyA in Peterson's protocol, has the exact same LTS as in the CCS model of Section 19. However, this time the notifications n̄_x^v are signal emissions rather than handshaking actions. The definition of the concurrency relation ⌣• on CCS transitions from Section 4.4 is in [Gla19a] extended to CCS with signals in such a way that, with the above way of modelling variables and the same Processes A and B as in Section 19, one obtains ℓ_2 ⌣• m_4. Here the action m_4 of reading the register is like a car reading a traffic light, and does not inhibit the write action ℓ_2 on the same register.
This shows that Peterson's algorithm can be correctly modelled in CCS with signals. Earlier, Dekker's algorithm was correctly modelled in the process algebra PAFAS with non-blocking reading [CDV09], and the same was done for Peterson's algorithm in [BCC+11]. The latter paper also points out that non-blocking reading is not strong enough an assumption to obtain starvation-freedom, or even deadlock-freedom as defined in Section 9, for Knuth's algorithm [Knu66]; this requires a fairness assumption.

23.4. Modelling Non-blocking Reading in CCS. The rendering in [DGH17] of Peterson's protocol employs an extension of CCS with signals that arguably strictly increases the expressiveness of the language. Namely, in CCS with signals one obtains an asymmetric concurrency relation ⌣•, which turned out to be essential for the satisfactory modelling of Peterson; restricted to proper CCS this relation is symmetric. Bouwman [Bou18] proposes the same modelling of Peterson's protocol, but entirely within the confines of the existing language CCS, namely by simply declaring some of the action names of CCS to be signals. As it is essential that the emission of a signal never causes a state-change, care has to be taken to only use CCS expressions in which the actions that are chosen to be seen as signal emissions occur in self-loops only. When doing this, creating a satisfactory rendering of Peterson's protocol within CCS is unproblematic [Bou18]. Neither [DGH17] nor [Bou18] mentions the concurrency relation ⌣• at all; they use a coinductive definition of justness instead. However, as shown in [Gla19a], the concept of justness common to [DGH17] and [Bou18] can equivalently be obtained as in Section 5 from an asymmetric concurrency relation ⌣•. Thus, by declaring certain CCS actions to be signals, Bouwman effectively changes the concurrency relation between CCS transitions labelled with those actions.
In [GH15b] it was stated that fair schedulers and mutual exclusion protocols cannot be rendered correctly in CCS without imposing a fairness assumption. In making that statement, the (symmetric) concurrency relation between CCS transitions defined in Section 4.4, or equivalently, the resulting notion of justness, was seen as an integral part of the semantics of CCS. In the early days of CCS, a lot of work was done on formalising notions of concurrency for CCS and related process algebras [NPW81, GM84, BC87, Win87, GV87, DDM87, Old87]. All that work is consistent with the concurrency relation defined in Section 4.4. Changing the concurrency relation, as implicitly proposed by Bouwman, alters the language CCS as seen from the perspective of [NPW81, GM84, BC87, Win87, GV87, DDM87, Old87]. However, it is entirely consistent with the interleaving semantics of CCS given by Milner [Mil90].

23.5. Modelling and Verification of Peterson's Algorithm with mCRL2. ACP [BW90,Fok00] and mCRL2 [GM14] are CCS-like process algebras that fall under the scope of the impossibility result of [GH15b]. That is, when defining a concurrency relation on the transitions of ACP or mCRL2 processes in the traditional way, consistent with the viewpoint of [NPW81, GM84, BC87, Win87, GV87, DDM87, Old87], and defining justness as in Section 5 in terms of this concurrency relation, Peterson's algorithm cannot be correctly rendered in ACP or mCRL2 when merely assuming justness, i.e., without resorting to a fairness assumption. Nevertheless, by the argument of [Bou18], Peterson's algorithm can be correctly rendered in these formalisms under the assumption of justness when using an alternative concurrency relation, one obtained by treating certain actions as signals.
Using this insight, Bouwman, Luttik and Willemse [BLW20] render Peterson's algorithm in an instance of ACP or mCRL2 that uses much more informative actions. This could be done without adapting those languages in any way, simply by choosing appropriate actions. Each action labelling a transition contains additional information on the components of the system from which this transition stems. This information is preserved under synchronisation, when a transition of a parallel composition is built from transitions of each of the components. The latter requires the general communication format of ACP or mCRL2, as in CCS synchronisation merely results in τ-transitions. A price paid for this approach is that the resulting LTSs have far fewer τ-transitions, so that there are fewer opportunities for state-space reduction by abstraction from internal activity.
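The contrast can be sketched as follows; the action names send, recv and comm are hypothetical, chosen only for this illustration. In CCS, synchronisation yields an anonymous internal action, whereas an ACP/mCRL2-style communication function γ can produce a visible result action that records which components took part:

```latex
% CCS: the synchronisation result is anonymous.
\[ a \mid \bar a \;\rightarrow\; \tau \]
% ACP/mCRL2 sketch: the communication function may keep component info i, j.
\[ \gamma(\mathit{send}_i(v),\, \mathit{recv}_j(v)) \;=\; \mathit{comm}_{i,j}(v) \]
```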
In Section 7 I showed how B-justness, and thereby the correctness criteria for mutual exclusion protocols, can be expressed in standard LTL enriched with a number of atomic propositions, such as en_t and t. Bouwman, Luttik and Willemse [BLW20] show not only that the same can be done in the modal µ-calculus, but also that all the necessary atomic propositions can be expressed in terms of the carefully chosen actions that are used in modelling the protocol. This made it possible to model the protocol in such a way that all correctness requirements can be checked with the existing mCRL2 toolset [BGK+19].
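For a flavour of the shape such properties take - a hedged illustration, not the exact formulas of Section 7 or [BLW20] - starvation-freedom for Client i might be rendered as the LTL formula below, evaluated over complete paths only; the propositions en_t (transition t is enabled) and t (transition t occurs) mentioned above then serve to restrict attention to B-just paths:

```latex
% Illustration only: G and F are the LTL modalities 'globally' and
% 'eventually'; ln_i and ec_i record occurrences of the corresponding
% interface actions of Client i.
\[ \mathbf{G}\,(\mathit{ln}_i \;\Rightarrow\; \mathbf{F}\,\mathit{ec}_i) \]
```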

Part IV. A Speed-dependent Rendering of Peterson's Protocol
In Part III I showed that when assuming atomicity and speed independence, Peterson's algorithm does not have the correctness property ME3_J - and in [Gla18] I argued that the same can be said for any other mutual exclusion protocol. That is, it satisfies starvation-freedom only under a weak fairness assumption, which in my opinion is not warranted - if it were, even the encapsulated gatekeeper of Section 15 would be an acceptable mutual exclusion protocol.
Thus, to obtain a correct version of Peterson's algorithm (or mutual exclusion in general) one has to drop either the assumption of atomicity or that of speed independence. The former possibility has been elaborated in Part III; Section 16 showed that when assuming non-blocking reading (option (2) from Section 20, resulting in the verdict 2 • m 4) instead of atomicity, Peterson's algorithm is entirely correct. Section 23 recalled how to model this with process algebra or Petri nets.
The latter possibility will be elaborated here. I present a speed-dependent incarnation of Peterson's protocol that satisfies all correctness requirements, even under the assumption of atomicity. Moreover, in the model of Section 26, requirement ME3_J can be strengthened to ME3_Pr, thereby attaining the best quality criteria in the hierarchy of Figure 3. As pointed out in Part III, this is not possible when keeping speed independence, even when dropping atomicity.
The idea is extremely simple. As explained in Section 16, all that is needed to obtain starvation-freedom is the certainty that when Process A reaches state 2, it will in fact execute instruction 2. The only thing that can stop Process A in state 2 from executing 2 is the register readyA being too busy being read by Process B to find time for being written by Process A. Now assume that there exists an amount t_0 of time such that, if for a period of at least t_0 Process A is in state 2 and the register readyA is available, in the sense that it is not being read by Process B, then in that period the write action 2 will commence. Further assume that Process B spends a period of at least t_0 in its critical or in its noncritical section. Then Process A will have enough time to perform the action readyA := true, and starvation-freedom is ensured. This is the speed-dependent version of Peterson's protocol I propose. In fact, I cannot exclude that mutual exclusion protocols work in practice exactly because timing constraints such as those sketched above are always met.
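The two timing assumptions can be condensed into one implication. This is a hedged paraphrase: the predicates at_2^A (Process A is in state 2 at time u) and avail(readyA, u) (readyA is not being read by Process B at time u) are ad-hoc notation introduced only for this summary.

```latex
% t_0 > 0 is the constant assumed above; s ranges over points in time.
\[
  \bigl(\forall u \in [s,\, s+t_0]:\;
     \mathit{at}^A_2(u) \,\wedge\, \mathit{avail}(\mathit{readyA}, u)\bigr)
  \;\Longrightarrow\;
  \text{the write } \mathit{readyA} := \mathit{true}
  \text{ commences within } [s,\, s+t_0]
\]
```

Since Process B spends at least t_0 in its critical or noncritical section, where it does not read readyA, such an interval [s, s+t_0] is guaranteed to occur while A waits in state 2.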
The above solution is sufficiently clear not to need mathematical proof. Nevertheless, I proceed with an implementation of the above idea in process algebra. The goal of this is mostly to see which process algebra we need to formalise time-dependent reasoning such as performed above. Naturally, a timed process algebra that associates real numbers with various passages of time would be fully equipped for this task. However, I will show that the idea can already be formalised in the realm of untimed process algebra, in the sense that the progress of time is not quantified.

CCS with Time-outs
Following [Gla21,Gla20a], my process algebra will be CCS_t, which is CCS, as presented in Section 4.3, but with α ranging over Act := A ∪ {τ, t}, with t a fresh time-out action. Relabellings f extend to this extended set of actions Act by f(t) := t. The interpretation of this language as an LTSC proceeds exactly as in Section 4.4.
All actions α ∈ Act are assumed to occur instantaneously. The time-out action t models the end of a time-consuming activity from which we abstract. When a system arrives in a state P, and at that time X is the set of actions allowed (= not blocked) by the environment, there are two possibilities. If P has an outgoing transition t with ℓ(t) ∈ X ∪ {τ}, the system immediately takes one of the outgoing transitions t with ℓ(t) ∈ X ∪ {τ}, without spending any time in state P. The choice between these transitions is entirely nondeterministic. The system cannot immediately take a transition t with ℓ(t) ∈ A\X, because the action ℓ(t) is blocked by the environment. Neither can it immediately take a transition t with ℓ(t) = t, because such transitions model the end of an activity with a finite but positive duration that started when reaching state P.
In case P has no outgoing transition t with ℓ(t) ∈ X ∪ {τ}, the system idles in state P for a positive amount of time. This idling can end in two possible ways. Either one of the time-out transitions P −t→ Q occurs, or the environment spontaneously changes the set of actions it allows into a different set Y with the property that P −a→ Q for some a ∈ Y. In the latter case a transition t with src(t) = P and ℓ(t) ∈ Y occurs. The choice between the various ways to end a period of idling is entirely nondeterministic. It is possible to stay forever in state P only if there are no outgoing time-out transitions.
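As a minimal operational sketch of this rule (the data types and function names are mine, invented for illustration; they are not part of CCS_t):

```python
from dataclasses import dataclass

TAU = "tau"      # the internal action
TIMEOUT = "t"    # the time-out action

@dataclass(frozen=True)
class Transition:
    src: str     # source state
    label: str   # action label
    tgt: str     # target state

def immediate_options(trans, state, allowed):
    """Transitions taken without idling in `state`: those labelled tau
    or with an action currently allowed by the environment."""
    return [u for u in trans
            if u.src == state and (u.label == TAU or u.label in allowed)]

def idling_end_options(trans, state, new_allowed):
    """If immediate_options is empty, the system idles; idling ends either
    in a time-out transition, or in a transition enabled once the
    environment changes its allowed set to `new_allowed`."""
    timeouts = [u for u in trans if u.src == state and u.label == TIMEOUT]
    unblocked = [u for u in trans if u.src == state and u.label in new_allowed]
    return timeouts + unblocked
```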
A fundamental law describing the interaction between τ- and t-transitions, motivated by the above, is τ.P + t.Q = τ.P. It says that when faced with a choice between a τ- and a t-transition, a system will never take the t-transition. I could have devised an operational semantics of CCS_t, featuring negative premises, that suppresses the generation of transitions R −t,C→ Q when there is a transition R −τ,D→ P. However, following [Gla21,Gla20a], I take a different, and simpler, approach. The operational semantics of CCS_t is exactly like that of CCS, and generates such spurious transitions R −t,C→ Q; instead, its semantics ensures that these transitions are never taken. In [Gla20a] a branching-time semantics is proposed, and in [Gla21] a linear-time semantics - the closest approximation of partial trace semantics [Gla01] that yields a congruence for the operators of CCS_t. Both these semantics satisfy τ.P + t.Q = τ.P.
Here I achieve the same by calling a path potentially complete when it features no transitions R −t,C→ Q for which there also exists a transition R −τ,D→ P. A completeness criterion now should set apart a subset of the potentially complete paths as being complete. So all paths containing spurious transitions R −t,C→ Q count as incomplete, and hence do not contribute to the evaluation of judgements in reactive temporal logic. In depictions of LTSs for fragments of CCS_t I will display the spurious transitions dotted, to emphasise that they cannot be taken.
A transition R −t,C→ Q also cannot be taken when there is an alternative R −a,D→ P, with a an action that surely will not be blocked by the environment when the system is in state R. Thus, whether or not a transition is spurious depends on the mood of the environment at the time this transition is enabled. This dependency is encoded in the semantic equivalences of [Gla21] and [Gla20a]. Given this, it was no extra effort to simultaneously inhibit the selection of transitions that are spurious in any environment.

Spurious Transitions and Completeness Criteria for LTSs with Time-outs
In LTSs with t-transitions, it makes sense to allow judgements P |=^CC_{B,E} ϕ with B ⊆ E ⊆ A, where A is the set of all actions except τ and t. Here B is the set of actions that can be permanently blocked by the environment, and E the ones that can be blocked for finite periods of time. My interest is in the cases CC = Pr and CC = J. When taking as default an environment that may block or allow any action a ∈ A, the annotations B and E rule out those environments in which an action from A\B is blocked permanently, or an action from A\E is blocked temporarily.
Definition 25.1. A transition t is E-spurious if ℓ(t) = t and there exists a transition u ∈ Tr with src(u) = src(t) and ℓ(u) ∈ (A\E) ∪ {τ}. It is spurious iff it is A-spurious.
Note that t is spurious iff it is E-spurious for all E. This is the case iff it is a t-transition sharing its source state with a τ-transition. When actions from A\E cannot be blocked by the environment, not even temporarily, E-spurious transitions cannot be taken.
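Definition 25.1 is directly executable; a hedged sketch, reusing the hypothetical Transition type and the constants TAU and TIMEOUT from the earlier sketch:

```python
def is_E_spurious(t, trans, E, A):
    """Definition 25.1: t is E-spurious iff it is a time-out transition
    whose source state also has an outgoing transition labelled tau or
    with an action in A \\ E (one the environment cannot block, not even
    temporarily)."""
    if t.label != TIMEOUT:
        return False
    unblockable = (A - E) | {TAU}
    return any(u.src == t.src and u.label in unblockable for u in trans)

def is_spurious(t, trans, A):
    """t is spurious iff it is A-spurious, i.e. iff it shares its source
    state with a tau-transition."""
    return is_E_spurious(t, trans, A, A)
```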
Definition 25.2. A path π is potentially E-complete if it contains no E-spurious transitions. It is B,E-progressing if it (a) is potentially E-complete, and (b) is either infinite or ends in a state of which all outgoing transitions have a label from B. It is B,E-just if (a) it is potentially E-complete, and (b) for each t ∈ Tr with ℓ(t) ∉ B whose source state s := src(t) occurs on π, any suffix of π starting at s contains a transition u with ¬(t •⌣ u).
For a finite path to be complete, its last state may have outgoing transitions with labels from B only, for a run comes to an end only when all subsequent activity is permanently blocked by the environment. In the absence of t-transitions, judgements s |=^CC_{B,E} ϕ are independent of E, and agree with the ones defined in Part I of this paper.
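Continuing the same sketches, the two path predicates of Definition 25.2 that do not involve the concurrency relation can be rendered as follows; a finite path is taken to be a non-empty list of Transitions:

```python
def is_potentially_E_complete(path, trans, E, A):
    """A path is potentially E-complete iff it contains no E-spurious
    transitions (Definition 25.2(a))."""
    return not any(is_E_spurious(t, trans, E, A) for t in path)

def is_BE_progressing_finite(path, trans, B, E, A):
    """Definition 25.2 for a finite path: potentially E-complete, and all
    outgoing transitions of its final state are labelled from B."""
    if not is_potentially_E_complete(path, trans, E, A):
        return False
    last = path[-1].tgt
    return all(u.label in B for u in trans if u.src == last)
```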
In the context of the present paper, when describing properties for a given or desired process P, I see no reason to combine judgements P |=^CC_{B,E} ϕ with different values of E. This suggests writing P |=^CC_{B,E} ϕ as (P, E) |=^CC_B ϕ. This way, the quality criteria of Sections 11 and 12 can remain unchanged, and apply to systems (P, E). Here P is a hypothetical fair scheduler or mutual exclusion protocol, and E the set of its actions that can be temporarily blocked by the environment.
To gauge the influence of the environment on the visible actions en_i, ln_i, ec_i and lc_i of a mutual exclusion protocol, one can see the processes i that compete for the critical section as clients that communicate with the protocol through synchronisation on these actions. As explained in Section 12, the actions ln_i belong in B (except when formulating requirement ME6), because ln_i is permanently blocked in case Client i chooses not to leave its noncritical section again. The actions lc_i belong in E, but not in B, because the client may need some time before leaving its critical section, but is assumed to do this eventually. As mentioned in Section 12, the actions ec_i and en_i do not belong in B, for we assume the client to eventually enter the (non)critical section when allowed by the protocol. There is a choice between putting these actions in E or not. Putting them in E models that the client may delay a while before entering the critical section when allowed, whereas putting them in A\E models that when the protocol for Client i is in its entry or exit section, the actual client will patiently wait until granted access to the (non)critical section, and take advantage of this opportunity as soon as it arises. Taking E = E_l := {ln_i, lc_i | i = 1, ..., N} appears most natural, but taking E = A = {ln_i, ec_i, lc_i, en_i | i = 1, ..., N} is a reasonable alternative. The latter leads to stronger judgements, in the sense that when a protocol P is correct when taking E := A, it is surely correct when taking E := E_l. To model both options at the same time, in Figure 8 I will draw E_l-spurious transitions dashed. Those transitions cannot be taken when choosing E := E_l, but they can be taken when choosing E := A.

[Figure 8: Speed-dependent LTS of Peterson's mutual exclusion algorithm]
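As a small executable summary of this classification (the set encoding is mine, purely illustrative):

```python
def interface_sets(N):
    """Classify the interface actions of an N-client mutual exclusion
    protocol into A (all actions), B (permanently blockable) and E_l
    (the 'most natural' choice of temporarily blockable actions)."""
    ln = {f"ln_{i}" for i in range(1, N + 1)}
    lc = {f"lc_{i}" for i in range(1, N + 1)}
    ec = {f"ec_{i}" for i in range(1, N + 1)}
    en = {f"en_{i}" for i in range(1, N + 1)}
    A = ln | lc | ec | en
    B = set(ln)    # a client may never leave its noncritical section
    E_l = ln | lc  # clients may also dally before leaving the critical section
    return A, B, E_l
```

Taking E := A instead of E := E_l then corresponds to the reasonable alternative discussed above.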

Modelling Peterson's Protocol in CCS with Time-outs
My model of Peterson's algorithm in CCS_t differs from the one from Section 19 in only one way: a t-action is inserted between ec_i and lc_i for i = A, B, in the two processes A and B.
Thus A def= ln_A . asgn_true^readyA . asgn_B^turn . (n_false^readyB + n_A^turn) . ec_A . t . lc_A . asgn_false^readyA . en_A . A. This models that a process spends a positive but finite amount of time in its critical section. The LTS of the resulting CCS_t rendering of Peterson's protocol is displayed in Figure 8. Exactly as in Sections 16 and 19, it follows that this model satisfies the requirements ME1, ME2, ME4_Pr, ME5_J and ME6_J. Additionally, it satisfies ME3_Pr, as follows immediately from the LTS.
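Only the component A is displayed here; its symmetric counterpart B - my reconstruction, obtained by swapping the roles of the two processes as in Section 19 - would read:

```latex
% Reconstruction for illustration; notation as in the definition of A above.
\[
  B \stackrel{\mathrm{def}}{=}
  ln_B \,.\, asgn_{true}^{readyB} \,.\, asgn_{A}^{turn} \,.\,
  (n_{false}^{readyA} + n_{B}^{turn}) \,.\,
  ec_B \,.\, t \,.\, lc_B \,.\, asgn_{false}^{readyB} \,.\, en_B \,.\, B
\]
```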
The same result would be obtained by letting time pass in the noncritical section, instead of, or in addition to, the critical section. It can be argued that it is not realistic to assume that assignments like 2 and 3 occur instantaneously. However, this part of the modelling in CCS_t is merely an abstraction, and can be taken to mean that the time needed to execute such an assignment is significantly smaller than the time a process spends in its critical and/or noncritical section. Using CCS_t, one can also make a model in which time is spent between every two instructions. In such a rendering one would obtain ME3_J, thus needing justness for starvation-freedom.
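For instance, such a rendering might insert a t before every instruction of A; this is a hedged sketch of one possible placement, not worked out in this paper:

```latex
% Sketch only: one possible placement of time-out actions between instructions.
\[
  A' \stackrel{\mathrm{def}}{=}
  ln_A \,.\, t \,.\, asgn_{true}^{readyA} \,.\, t \,.\, asgn_{B}^{turn} \,.\, t \,.\,
  (n_{false}^{readyB} + n_{A}^{turn}) \,.\, t \,.\,
  ec_A \,.\, t \,.\, lc_A \,.\, t \,.\, asgn_{false}^{readyA} \,.\, t \,.\, en_A \,.\, A'
\]
```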