Artificial Intelligence In Medicine

combine their specifications in the context of the monitoring task. In our previous work (Alman et al., 2022), we presented a conceptual framework for handling the above cases in the context of monitoring. In this paper, we present the algorithms necessary for implementing key components of this conceptual framework. More specifically, we provide formal languages for representing clinical guideline specifications and formalize a solution for monitoring the interplay of such specifications expressed as a combination of (data-aware) Petri nets and temporal logic rules. The proposed solution seamlessly handles the combination of the input process specifications and provides both early conflict detection and decision support during process execution. We also discuss a proof-of-concept implementation of our approach and present the results of extensive scalability experiments.


Introduction
Evidence-based medicine has led to the definition of a number of clinical practice guidelines, containing systematic recommendations on how to handle patient care. The aim of such guidelines is to improve the quality of care, to mitigate unjustified deviations, and to reduce costs. The further transition from clinical practice to computer-interpretable guidelines (CIGs) has paved the way towards decision support systems aiding healthcare and administrative professionals in the modeling, execution, analysis, and continuous improvement of clinical guidelines [47].
Clinical practice guidelines and their computer-interpretable counterparts are defined by assuming an ideal execution context [8]. This essentially means that a guideline assumes unlimited provision of resources and considers a single condition in isolation. Real treatment processes are far more complex, requiring: 1. integrating multiple CIGs based on the specific medical condition of the patient, especially in the light of their co-morbidities; 2. considering the interplay between such CIGs and background medical knowledge; 3. continuously adapting and personalizing the resulting process depending on the current circumstances, that is, on the history of executed activities, the effects and data they have produced, and other relevant events.
The net effect of this complexity is that, while treating the patient, healthcare providers may deviate from the courses of execution prescribed by the guideline(s). Deviations may arise for a wide range of different reasons, from human mistakes to deliberate choices that depart from the pre-defined execution paths to save the patient's life. Dedicated techniques have consequently been developed within medical informatics to compare the execution trace and electronic patient record of a specific patient with one or more corresponding reference CIGs. As surveyed in [47], these techniques fall under the name of compliance checking or critiquing, depending on the input knowledge they consider and how they use such knowledge to elaborate on the meaning of and reasons behind the detected deviations. Interestingly, these notions incarnate, in the clinical setting, one of the main general analytic tasks of process mining, namely conformance checking [13]. Conformance checking is the task of comparing the expected behavior captured in a reference process model with the event data tracing actual instances of that process, and extracting (fine-grained) insights from the detected deviations. Monitoring techniques for conformance checking at runtime, tracking ongoing process executions and detecting deviations as soon as possible, have also been extensively investigated [38].
As pointed out in [47], the vast majority of approaches for compliance checking and critiquing present two main shortcomings: 1. compliance is evaluated on a single CIG, and even when multiple CIGs are considered at once, their interplay with background medical knowledge is not tackled at all; 2. compliance is evaluated a-posteriori, and the adopted techniques do not lend themselves to be used at runtime, as they cannot handle the inherent incompleteness of an ongoing, evolving execution.
In this work, we tackle these two open challenges, bringing forward a comprehensive formal framework for monitoring multiple process specifications and anticipatory detection of deviations. The framework comes with two distinctive features related to specification and monitoring.
As for specification, we adopt and further develop the M3 framework previously introduced in [2]. The M3 framework brings forward a formal multi-model approach where a process specification emerges from the combination of procedural and declarative process components, which can be respectively used to express CIGs/medical procedures and background medical knowledge. The same process instance concurrently goes through the various components at the same level of abstraction, calling for handling their mutual interplay; this results in so-called loosely coupled hybrid models [3]. Procedural components are represented using a data-aware extension of Petri nets [35,43] where transitions are associated with guards that check and manipulate numerical data variables using variable-to-constant comparisons. Declarative rules are specified using a corresponding data-aware extension of linear temporal logic over finite traces [18] that forms a proper fragment of the logics studied in [12,26,27]. Within this logic, of particular interest are patterns expressed in a fragment of data-aware extensions [11,20,39] of the Declare declarative process specification language [44,50]. The focus on finite traces is motivated by the fact that process instances are expected to terminate, possibly executing unboundedly many steps that eventually lead the instance to one among alternative final states.
This approach allows us to handle the interplay of different components considering both the control-flow and data dimensions, thus supporting interaction schemes like those elicited in [8][9][10] for the medical domain. An example: if a CIG of interest expects that at least 500mg of paracetamol should be administered to a patient, and background medical knowledge indicates that for that patient 1000mg is the maximum quantity, this results in an admissible interval of 500-1000mg. Another example: if a CIG for handling bacterial pneumonia prescribes two alternative treatments, one based on macrolide and another on penicillin, and the patient is found to be allergic to penicillin, then the only treatment that conforms with both the CIG and background medical knowledge is the macrolide one. More generally, we show how simultaneously accounting for models from different modeling paradigms allows us to capture sophisticated forms of scoping and interaction among such models, going beyond what has been captured so far in the literature on process mining and providing a formal characterization of forms of hybrid processes mixing CIGs and background medical knowledge, such as those in [4,9,10,51,52].
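The first example boils down to intersecting the intervals induced by the two specifications. A minimal sketch in Python (the function name and the `None`-on-conflict convention are ours, purely illustrative, not part of the framework):

```python
def admissible_interval(cig_min, bk_max):
    """Intersect a CIG lower bound with a background-knowledge upper bound.

    Returns the admissible (min, max) dose interval, or None if the two
    specifications conflict (i.e., the intersection is empty)."""
    if cig_min > bk_max:
        return None  # conflict: no dose satisfies both specifications
    return (cig_min, bk_max)

# Paracetamol example from the text: the CIG requires at least 500mg,
# background knowledge caps the dose at 1000mg for this patient.
print(admissible_interval(500, 1000))   # (500, 1000)
print(admissible_interval(1200, 1000))  # None -> conflict
```

A conflict detected here (empty intersection) corresponds exactly to the early conflict detection discussed later for the monitor.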
As for monitoring, we resort to methods from runtime verification [36], considering in particular the construction of automata-based monitors for finite traces, extending [16] to account for hybrid models with data variables and variable-to-constant comparisons. The adoption of a formal approach to monitoring is not only useful because it guarantees the construction of correct-by-design monitors, but also because it goes far beyond the mere consideration of execution prefixes and the resulting state of affairs. In particular, we can combine the execution prefix indicating what happened so far within a process instance (e.g., the sequence of activities to which a patient has been subject so far) with speculative reasoning on the possible (infinitely many) future continuations of the instance. This is essential to detect, at the earliest moment possible, violations that cannot be directly ascribed as deviations w.r.t. a single procedural component or declarative rule, but instead emerge as a conflict among different components considering their mutual interaction given the current execution state. A state of conflict indicates that, while currently none of the considered components is permanently violated, every continuation will inevitably violate at least one of them.
Consider, for example, a CIG that, given the current trace prefix of a patient treatment process, expects the execution to continue through a long sequence of activities, culminating in one where penicillin is administered to the patient. If the patient is currently found to be allergic to penicillin, the monitor should immediately report the presence of a conflict. This aspect has already been extensively studied in the case of declarative process specifications [16,26,40,42], but never considering the hybrid, multi-model setting studied in this paper. This form of early detection of violations is of particular interest in the medical domain, as conflicts between different CIGs, or between CIGs and background medical knowledge, can often occur and should in fact be explicitly reported to healthcare professionals, who in turn have the responsibility of handling their resolution, which cannot be hardwired upfront [8,9].
Technically, we substantiate this multi-model specification and monitoring framework through three novel contributions, which together provide a complete formalization of the approach outlined in [2]:
1. First and foremost, we tackle the infinity induced by the presence of data, which, in general, leads to undecidability of monitoring [12]. Specifically, we recast in our setting faithful data abstraction techniques for variable-to-constant comparisons, originally developed for the verification of data Petri nets [19,35]. This allows us to obtain finitely representable monitors based on traditional finite-state automata.
2. Second, we construct homogeneous monitors for declarative and procedural components, and define how to combine them into a unique, global monitor for conflict detection; this is obtained by computing a form of automata cross product, which conceptually describes the execution traces of a hybrid specification where all procedural and declarative components are simultaneously applied. This corresponds to the concurrent execution of all Petri nets contained in the specification, while at the same time checking the actual and possible satisfaction of constraints.
3. Third, we further refine the global monitor by acknowledging that monitoring should continue to provide meaningful feedback even after a violation has been detected. Specifically, we ensure that when the global monitor returns a permanent violation (due to an explicit violation of a process specification, or the presence of a conflict), the monitor continues to operate, and can distinguish which continuations may lead to additional violations. We associate a violation cost to each component, equipping the global monitor with the ability of returning the best-possible next events, i.e., events that would lead to the minimum total violation cost that can be obtained based on the events executed thus far.
The presented monitoring approach has been implemented in a proof-of-concept tool, which is used for extensive scalability experiments using a scenario from the medical domain. The experiments cover input specifications of different complexity, expressiveness, and size, showing the feasibility of the approach, but also highlighting its current limitations.
The remainder of this paper is structured as follows. Section 2 provides an example scenario from the medical domain. Section 3 briefly describes the M3 framework. Sections 4 and 5 introduce the formal definitions and algorithmic solutions underlying the proposed monitoring approach. Section 6 presents the evaluation of the proposed monitoring approach. Section 7 discusses the related work and Section 8 concludes the paper.

Example scenario
Let us now introduce a simple scenario, taken from the medical domain, that demonstrates two clinical guidelines together with one background knowledge constraint. Both guidelines and the constraint are adopted from the scenario reported in [51]. We use this scenario to showcase when comorbidity issues may arise during the treatment process and how the (monitoring) approach proposed in this paper can help healthcare providers to address such issues. The same guidelines are discussed in more detail in [4] and were originally provided by the British National Institute for Health and Care Excellence (www.nice.org.uk).¹
Scenario. A patient shows up at the emergency department with an acute stomach pain and immediately gets assigned to one of the emergency physicians. After having looked through the patient's medical history and a brief examination, the doctor concludes that the patient is experiencing a peptic ulcer (PU) relapse and chooses to follow the standard PU treatment clinical guideline (CG). The standard procedure provided for this CG envisions that, first, a helicobacter pylori test is performed, and then, based on its outcome, either prescribes amoxicillin administration, if the test is positive, or gastric acidity reduction otherwise. Afterwards, a peptic ulcer evaluation exam is performed so as to estimate the effects of the therapy.
After having performed the helicobacter test, the doctor prescribes the amoxicillin-based treatment. However, the doctor does not know that the patient also suffers from venous thromboembolism (VT) and is already being followed by a cardiologist from the same hospital.
In acute phases, VT requires an immediate intervention decision, chosen among three different possibilities based on the situation of the specific patient. Mechanical intervention uses devices that prevent the proximal propagation or embolization of the thrombus into the pulmonary circulation, or involves the removal of the thrombus. The other possibilities are an anticoagulant therapy based on warfarin, or a thrombolytic therapy. To help the patient cope with the acute phase of VT, the cardiologist had prescribed the warfarin therapy, which the patient was already following at the moment when the helicobacter pylori test was performed.
¹ While [4,51] do not cite specific guidelines, based on our research the relevant guidelines are CG184, NG158, and TA287, with the latter highlighting warfarin as a recommended anticoagulation treatment for venous thromboembolism.
It is known that any interaction between the amoxicillin therapy (in the PU procedure) and the warfarin therapy (in the VT procedure) is usually avoided in medical practice, since amoxicillin increases the anticoagulant effect of warfarin, raising the risk of bleedings. Therefore, in cases where the PU and VT procedures are performed simultaneously, it is important to alert the healthcare providers simultaneously following one patient with multiple conditions about the impossibility of administering warfarin and amoxicillin together (that is, the amoxicillin and warfarin administration activities cannot coexist in the same treatment case). This constraint (we call it C) is an example of basic medical knowledge [9] forming an additional process specification in the given scenario.
As we have mentioned above, the patient tested positive for helicobacter pylori while undergoing the warfarin therapy. Using constraint C, even without knowing about other therapies, the doctor can be alerted by the hospital medical workflow management system about a conflict arising between the two CGs and C. For this outlier, yet possible, situation the doctor is offered three alternatives to proceed: 1. violating PU (by skipping the amoxicillin therapy); 2. violating VT (by using an alternative anticoagulant); 3. violating C (giving priority to the two procedures).
After having been informed about the presence of the conflict, the emergency physician can assess the patient's case with respect to the other treatment in progress, weigh the implications of one choice over the others, and finally make an informed decision. Since one should avoid by any means complications such as serious bleeding (which could happen if C is ignored), and given that skipping the amoxicillin therapy is rather costly, due to the lack of viable alternatives for treating peptic ulcer in case of helicobacter pylori, the doctor chooses to violate the VT procedure, as there are other anticoagulants (e.g., heparin) that may be less effective but do not interact strongly with amoxicillin.
The above reasoning can be achieved by assigning violation costs to the CG specifications and background knowledge constraints. In this way, the hospital workflow management system can use this information to suggest the most apt resolution. For example, the option of violating C should come with the highest cost among the considered process specifications.
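The cost-based recommendation can be sketched as picking, among the specifications involved in a conflict, the one whose violation is cheapest. The concrete cost values below are purely illustrative assumptions, not taken from the paper:

```python
# Hypothetical violation costs for the three specifications of the scenario.
# C (background medical knowledge) gets the highest cost, so the system
# steers the user away from ignoring it.
costs = {"PU": 5, "VT": 2, "C": 10}  # illustrative values

def cheapest_violation(conflicting):
    """Among the specifications involved in a conflict, return the one
    whose violation incurs the minimum cost."""
    return min(conflicting, key=costs.get)

print(cheapest_violation({"PU", "VT", "C"}))  # VT: matches the doctor's choice
```

With these (assumed) costs, the recommended resolution coincides with the physician's decision in the scenario: violate VT and switch to an alternative anticoagulant.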
Notice that we can also deal with (meta-)constraints [15] that impose conditions on the process execution depending on the truth value of other constraints. As an example, we will specify a metaconstraint dictating that if constraint C gets violated, then the patient must be placed under heightened observation, which is represented as requiring an additional observation activity.

M3 framework
In our previous work [2], we have introduced the Multi-Model Monitoring Framework (M3 Framework) to address scenarios like the one described in Section 2. In this section we give a brief overview of this framework, thus setting the stage for the monitoring approach presented in Sections 4 and 5.For more details on the framework itself we refer the reader to [2].
The M3 framework is organized in phases which allow for eliciting, managing, and monitoring hybrid process specifications.It supports scenarios with multiple procedural and declarative specifications, where the former can be executed concurrently and the latter work as global constraints that implicitly induce additional dependencies between the procedural specifications.Both declarative and procedural specifications are stored in a specification repository (see Fig. 1), which is constantly updated during the model elicitation phase, and then used for monitoring individual cases in the case-specific preparation and monitoring phases.
Elicitation Phase. The elicitation phase of the M3 Framework is envisioned as a continuous case-agnostic phase, during which domain knowledge (standard procedures, classifiers, etc.) and organizational context (business priorities, available resources, etc.) are transformed into concrete process specifications, which are then stored in a dedicated specification repository. Both declarative and procedural specifications are supported, and multiple specifications of either paradigm can be used to represent a single global process, thus supporting not only hybrid process specifications, but also, for example, component-based and aspect-oriented modeling approaches [34]. Full agreement between the individual process specifications is not assumed; instead, each specification is associated with a priority value that is used during monitoring to provide guidance to the user in resolving any potential conflicts. Concrete approaches for handling the elicitation phase are out of the scope of this paper.
Preparation Phase. The preparation phase of the M3 Framework is envisioned as a case-specific non-recurrent phase in which an incoming case is assessed, relevant process specifications are selected, and a corresponding hybrid specification is automatically created as an input for the following monitoring phase. This hybrid specification encompasses the combined behavior of all selected specifications, the corresponding specification priorities, and, if required, also additional case-specific modifications. The selected process specifications can be smaller fragments of a single business process, but also fragments or full specifications of multiple, separately defined (but concurrently executed) business processes. The main functionalities of this phase are formalized in Section 5 and evaluated in Section 6.
Monitoring Phase. The monitoring phase of the M3 Framework is envisioned as an ongoing case-specific phase, covering the entire duration of the case being monitored. During this phase, the state of the monitor and the set of next recommended actions (with payloads) are updated after the occurrence of each event, therefore providing guidance towards the successful completion of the case. The monitoring phase is also envisioned to allow for optional modifications of the current hybrid process specification so as to handle emerging case characteristics that were unforeseen during the preparation phase. However, the implementation of this functionality is currently left for future work. Similarly to the previous phase, the main functionalities of this phase are formalized in Section 5 and evaluated in Section 6 (including concrete examples of the output of this phase).

Process components
In this section, we define the models used to specify declarative and procedural data-aware process components, based on Multi-Perspective Declare (MP-Declare) [11,20,39] and data Petri nets (DPNs) [35,43], respectively.
We start by fixing some preliminary notions related to events and traces. An event signature is a tuple ⟨a, A⟩, where a is the activity name and A = {x₁, …, xₘ} is the set of event attributes (names). We assume a finite set Σ of event signatures, each having a distinct name (thus we can simply refer to an event signature by its name). With Σ_act = ⋃⟨a,A⟩∈Σ {a} and Σ_att = ⋃⟨a,A⟩∈Σ A we respectively denote the sets of all event and attribute names from Σ.
An event of signature ⟨a, A⟩ is a pair e = ⟨a, ν⟩, where ν : A → ℝ is a total value assignment function. For simplicity, we assume attributes ranging over reals equipped with comparison predicates (simpler types, such as strings with equality and booleans, can be seamlessly encoded). We call a finite sequence τ = e₁ ⋯ eₙ of events a trace. Given a trace τ, |τ| denotes its length and τ(i), for 1 ≤ i ≤ |τ|, denotes eᵢ.
Multi-Perspective Declare with Local Conditions. For declarative process components, we rely on a multi-perspective variant of the well-known process modeling language Declare [50]. A Declare model consists of template-based constraints that must be satisfied throughout the process execution. The template syntax and semantics are formalized using Linear Temporal Logic over finite traces (LTLf) [44].
Definition 1. An LMP-Declare constraint is a temporal formula whose atomic building blocks are activity names and attribute-to-constant comparisons of the form x ⊙ c (formulas of these forms are called atomic), combined through boolean connectives and the temporal operators of LTLf; we define the usual abbreviations.
Notice that the language of boolean combinations of attribute-to-constant comparisons without event variables closely resembles that of variable-to-constant conditions in [35], thus providing a good basis for combining declarative constraints with procedural models expressed as DPNs. As in standard LTLf, the strong next operator requires the existence of a next state where the inner formula holds, while strong until requires the right-hand formula to eventually hold, forcing the left-hand formula to hold in all intermediate states.
We inductively define when an LMP-Declare constraint φ is satisfied by a trace τ at position 1 ≤ i ≤ |τ|, written τ, i ⊧ φ, in the standard way; in particular, for an atomic condition β, τ, i ⊧ β iff the value assignment of τ(i) satisfies β. As an example, one can express a constraint capturing that whenever event a occurs, then b cannot later occur with its attribute x carrying a value greater than 10.
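Over a finished trace, the example constraint can be checked with a single pass. The following sketch is our own illustrative code (activity names a, b, attribute x and the threshold 10 mirror the example; they are not part of the formal language definition):

```python
def violates_not_response(trace, a="a", b="b", attr="x", threshold=10):
    """Check the example constraint: whenever activity a occurs, activity b
    must NOT occur later with payload attribute attr greater than threshold.

    trace: list of (activity, payload_dict) pairs.
    Returns True iff the constraint is violated."""
    seen_a = False
    for act, payload in trace:
        if act == a:
            seen_a = True
        elif seen_a and act == b and payload.get(attr, float("-inf")) > threshold:
            return True
    return False

t1 = [("a", {}), ("c", {}), ("b", {"x": 12})]  # b with x=12 after a: violated
t2 = [("a", {}), ("b", {"x": 7})]              # x <= 10: satisfied
print(violates_not_response(t1), violates_not_response(t2))  # True False
```

Monitoring, as discussed in Section 5, goes beyond this a-posteriori check: the automaton-based monitor evaluates such constraints on prefixes and reasons about all possible continuations.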

Data Petri nets.
We define data Petri nets (DPNs) by adjusting [35,43] to our needs. In particular, our definition needs to accommodate the fact that a monitored trace will be matched against multiple process components (which will be the focus of Section 5).
Let Σ be a finite set of event signatures. Guards over Σ are boolean combinations of atomic formulas of the form x ⊙ c. We can then specialize the notion of satisfaction to guards: given an assignment ν : Σ_att → ℝ and an atomic condition x ⊙ c, we have that ν ⊧ x ⊙ c iff ν(x) ⊙ c. Boolean combinations of atomic conditions are handled as usual. We denote by attr(g) the set of attributes mentioned in a guard g.
Definition 2. A Petri net with data and variable-to-constant conditions (DPN) over a set Σ of event signatures is a tuple N = (P, T, F, …) whose first components form a Petri net graph (P, T, F), where P and T are two finite disjoint sets of places and transitions, and F : (P × T) ∪ (T × P) → ℕ is the net's flow relation; the remaining components equip each transition t with a label and with guards over its read and written variables. We denote transition firing as (M, ν)[t, β⟩(M′, ν′). A state (M′, ν′) is reachable from (M, ν) if there exists a sequence of transition firings from (M, ν) to (M′, ν′).
In this paper, we deal only with DPNs that are safe and well-formed (over their respective set of event signatures Σ). The former means that in every reachable state, each place can have at most one token; this is assumed for convenience (our approach seamlessly works for bounded nets). The latter means that transitions and event signatures are compatible, that is: (i) for every transition t labeled with an activity name a, where ⟨a, A⟩ ∈ Σ, the variables written by t are exactly the attributes in A; (ii) for every silent (unlabeled) transition t, net variables are left untouched, that is, its guard is trivially true (≡ ⊤). The first requirement captures the intuition that the payload of an event is used to update the net variables, provided that the corresponding write guard is satisfied. The second requirement indicates that variables are only manipulated when a visible transition, triggered by an event, fires.
Consider a DPN with initial state and final marking (DPNIF), denoted as D = (N, (M₀, ν₀), M_F), where N is a DPN, (M₀, ν₀) is a state of N (called the initial state), and M_F is a marking of N (called the final marking). A run of D is a sequence of transition firings of N that starts from (M₀, ν₀) and finally leads to a state (M, ν) with M = M_F.
Let us now define when a trace complies with a DPNIF. This captures that the events contained in the trace can be turned into a corresponding run, possibly inserting silent transitions, while keeping the relative order of events and their correspondence to elements in the run. To do so, we need a preliminary notion. Given two sequences η₁ and η₂ such that |η₂| ≥ |η₁|, an order-preserving injection 𝜄 from η₁ to η₂ is a total injective function from the elements of η₁ to those of η₂ such that, for every two elements y₁, y₂ in η₁ where y₂ comes later than y₁, we have that 𝜄(y₂) comes later than 𝜄(y₁) in η₂. This notion allows us to map traces into (possibly longer) runs of a DPNIF.
Definition 3. A trace τ = e₁ ⋯ eₙ complies with a DPNIF D with labeling function ℓ if there exists a run η of D and an order-preserving injection 𝜄 from τ to η such that: • for every event e = ⟨a, ν⟩ in τ s.t. 𝜄(e) = [t, β⟩, we have that ℓ(t) = a and ν corresponds to β for the written variables of t²; • every element [t, β⟩ in η that does not correspond to any element from τ via 𝜄 is such that t is silent.
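For intuition, when the net has no silent transitions, compliance reduces to replaying the trace on the token game while evaluating guards on the event payloads, and then checking that the final marking is reached. The sketch below is ours (net structure, place/transition names, and the hp_result attribute are illustrative, loosely inspired by the PU guideline of Section 2):

```python
# A toy safe DPN without silent transitions: markings are sets of places,
# guards are variable-to-constant conditions evaluated on the event payload.
NET = {
    "hp_test":        {"pre": {"p0"}, "post": {"p1"}, "guard": lambda v: True},
    "amoxicillin":    {"pre": {"p1"}, "post": {"p2"},
                       "guard": lambda v: v.get("hp_result", 0) == 1},
    "acid_reduction": {"pre": {"p1"}, "post": {"p2"},
                       "guard": lambda v: v.get("hp_result", 0) == 0},
}
FINAL = {"p2"}

def complies(trace, marking=frozenset({"p0"})):
    """Replay a trace of (activity, payload) events; return True iff every
    event fires an enabled, guard-satisfying transition and the final
    marking is reached at the end."""
    m = set(marking)
    for act, payload in trace:
        t = NET.get(act)
        if t is None:
            continue  # events whose signature the net does not use are ignored
        if not t["pre"] <= m or not t["guard"](payload):
            return False  # transition not enabled, or guard violated
        m = (m - t["pre"]) | t["post"]
    return m == FINAL

print(complies([("hp_test", {}), ("amoxicillin", {"hp_result": 1})]))  # True
print(complies([("hp_test", {}), ("amoxicillin", {"hp_result": 0})]))  # False
```

Handling silent transitions requires searching for an order-preserving injection into a run, as in Definition 3; the monitor construction of Section 5 handles this via the automaton encoding rather than by explicit search.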

Monitoring approach
In this section we provide our main technical contribution: the construction of monitors for hybrid processes. In our context, a hybrid process H over a set Σ of event signatures is simply a set of process components, that is, LMP-Declare constraints and DPNIFs over Σ. Then, monitoring a trace against H basically amounts to running this trace concurrently over all the DPNIFs of H, simultaneously checking whether all constraints in H are satisfied. When the trace is completed, it is additionally checked that the trace is indeed accepted by the DPNIFs. An important clarification, when characterizing the concurrent execution over multiple DPNIFs, is that such components may come from different sources, not necessarily employing all the event signatures from Σ. In this light, it would be counterintuitive to let a DPNIF reject an event just because its signature is not used therein at all. We fix this by assuming that, whenever such a situation occurs, the DPNIF simply ignores the currently processed event.
Given this basis, the construction of monitors for such hybrid processes goes through multiple conceptual and algorithmic steps, detailed next.

Interval abstraction
The first challenge that one has to overcome is related to reasoning with data conditions, that is, checking whether a condition is satisfied by an assignment, and checking whether a condition is satisfiable (both operations will be instrumental when constructing automata). The main issue is that, due to the presence of data, there are infinitely many distinct assignments from variables/attributes to values, which induce infinitely many states to consider in the DPNs (even when the net is bounded). We deal with this infinity by building on the faithful abstraction techniques studied in [35], recasting them in our more complex setting. The idea is to avoid referring to single real values, and instead predicate over a fixed number of intervals, which in turn leads us to propositional reasoning. This comes from the observation that data conditions can distinguish between only those constants that are explicitly mentioned therein; hence, it suffices to consider only the constants used in the process components (i.e., in some atomic condition, guard, or initial DPN assignment) to delimit the intervals to consider.
Technically, let C = {c₁, …, cₙ} be a finite set of values from ℝ, assuming without loss of generality that cᵢ < cᵢ₊₁ for 1 ≤ i < n. These values partition ℝ into the set R_C of regions consisting of the open intervals (−∞, c₁), (c₁, c₂), …, (cₙ, +∞) and the singletons [c₁, c₁], …, [cₙ, cₙ]. Notice that R_C is finite, with a size that is linear in n, and can also be seen as a fixed set of propositions, which is crucial for our approach. Each region in the partition is an equivalence region for the satisfaction of the atomic formulas in the following sense: given two valuations ν and ν′ defined over x, such that ν(x) and ν′(x) are from the same region I ∈ R_C, then ν ⊧ x ⊙ c if and only if ν′ ⊧ x ⊙ c.
We exploit this as follows. We fix a finite set V of variables (referring to attributes) and lift an assignment ν : V → ℝ into a corresponding region assignment ν̂ : V → R_C so that, for every x ∈ V, ν̂(x) returns the unique region to which ν(x) belongs. We can then use ν̂ to check whether a formula holds over ν. For example, ν satisfies x > c with c ∈ C if and only if ν̂(x) is a region lying strictly to the right of c. The same reasoning can be similarly done for the other comparison operators and carries over to boolean combinations of atomic formulas. The key observation here is that doing this check amounts to propositional reasoning, and so does checking satisfiability of atomic formulas: in fact, since both V and R_C are finite, there are only finitely many region assignments that can be defined from V to R_C.
² Recall that β involves both read and written variables. The read variables are used to guarantee that the fired transition is enabled, and it is on the written variables that ν and β must agree.
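The partition and the lifting can be sketched directly (a minimal illustration of the interval abstraction; the region encoding as tuples is our own convention):

```python
def regions(constants):
    """Build the interval partition induced by a set of constants:
    (-inf, c1), [c1, c1], (c1, c2), ..., [cn, cn], (cn, +inf)."""
    cs = sorted(constants)
    out = [("open", float("-inf"), cs[0])]
    for i, c in enumerate(cs):
        out.append(("point", c, c))  # singleton region [c, c]
        hi = cs[i + 1] if i + 1 < len(cs) else float("inf")
        out.append(("open", c, hi))  # open interval up to the next constant
    return out

def region_of(value, parts):
    """Lift a real value to the unique region it belongs to."""
    for kind, lo, hi in parts:
        if (kind == "point" and value == lo) or (kind == "open" and lo < value < hi):
            return (kind, lo, hi)

parts = regions([500, 1000])
# 700 and 800 fall in the same region (500, 1000), hence they satisfy
# exactly the same variable-to-constant comparisons over {500, 1000}.
print(region_of(700, parts) == region_of(800, parts))  # True
print(region_of(500, parts))  # ('point', 500, 500)
```

Note that `regions` yields 2n + 1 regions for n constants, matching the stated linear size of R_C.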
Given the process components of interest, we fix V to the set Σ_att of all the attributes in the event signatures Σ of the system under study (this contains all variables used in its process components), and C to the set of all constants used in the initial states of the DPNs, or mentioned in some condition of a process component. We then consistently apply the lifting strategy from assignments to region assignments when it comes to traces and DPN states. In the remainder, we assume that V and C are fixed as described above.

Encoding into guarded finite-state automata
To capture the execution semantics of process components, we introduce a semi-symbolic guarded finite-state automaton (GFA), whose transitions are decorated with boolean combinations of atomic formulas; for ease of reference, given a set Σ of event signatures, we denote by G_Σ the language of such boolean combinations. A GFA-run on a trace e₁ ⋯ eₙ is a finite sequence of states q₀, q₁, …, qₙ such that, for 1 ≤ i ≤ n, the GFA has a transition from qᵢ₋₁ to qᵢ whose guard is satisfied by eᵢ. In general, an event e can satisfy the guards of many transitions outgoing from a state q, as guards are not required to be mutually exclusive. Thus, a trace may correspond to many GFA-runs. In this sense, GFAs are, in general, nondeterministic.
It is key to observe that GFAs can behave like standard finite-state automata. In fact, by setting C to a finite set of constants including all those mentioned in the automata guards, we can apply the interval abstraction from Section 5.1. In particular, in place of considering the infinitely many events over Σ, we can work over the finitely many abstract events defined using region assignments over Σ_att. For example, we can check whether a trace τ = ⟨a₁, ν₁⟩ ⋯ ⟨aₙ, νₙ⟩ is accepted by a GFA by checking whether the abstract trace ⟨a₁, ν̂₁⟩ ⋯ ⟨aₙ, ν̂ₙ⟩ does so. Notice that, to construct this abstract trace, it suffices to represent each event ⟨a, ν⟩ in τ using equivalence regions from R_C. From here on, we denote the abstract trace by τ̂.
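Nondeterministic acceptance by a GFA can be sketched with the usual subset-style run-tracking, keeping the set of states reachable after each event (our own illustrative code; state names, guards, and event shape are assumptions):

```python
def gfa_accepts(trace, transitions, init, accepting):
    """Nondeterministic GFA acceptance: a trace is accepted iff SOME run
    ends in an accepting state. transitions: list of (src, guard, dst),
    where guard is a predicate over the event payload."""
    states = {init}
    for event in trace:
        states = {dst for (src, guard, dst) in transitions
                  if src in states and guard(event)}
        if not states:
            return False  # no run can process this event
    return bool(states & accepting)

# Two transitions leave q0 with overlapping (non-mutually-exclusive) guards,
# so a single event may spawn several runs at once.
T = [("q0", lambda e: e["x"] > 0, "q1"),
     ("q0", lambda e: e["x"] > 5, "q2"),
     ("q1", lambda e: True, "q1")]
print(gfa_accepts([{"x": 7}], T, "q0", {"q2"}))  # True: one run reaches q2
print(gfa_accepts([{"x": 3}], T, "q0", {"q2"}))  # False: only q1 is reached
```

Thanks to the interval abstraction, the predicates here could equivalently be evaluated on region assignments instead of raw reals, making the automaton effectively finite-state.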
Thanks to this, we can build GFAs using standard automata techniques (e.g., for LTL  , as discussed below), and directly apply standard algorithms, coupled with our interval abstraction, to minimize and determinize GFAs.
From LMP-Declare constraints to GFAs. Finite-state automata have been extensively used for the monitoring of LTLf formulas and, in particular, Declare constraints [15,18]. In our context, the latter are represented using GFAs in which guards carry only propositions referring to activity names. In the case of LMP-Declare, the guards are richer formulas which, however, thanks to the interval abstraction, can be represented using a finite number of propositions. Hence, we can build on top of the standard finite-state automaton construction for a Declare constraint [15], in which transitions are labeled by the formulas mentioned within the constraint. We keep only those transitions whose labels are satisfiable formulas (as discussed in Section 5.1, satisfiability checking is done via propositional reasoning). Lastly, it is important to mention that the obtained GFA is kept complete, so that each of its states can process every event over the signature.
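As an illustration of the standard construction we build on, the two-state monitor automaton for the Declare constraint response(a, b) (every a is eventually followed by b) can be sketched as follows. This is a simplification of our own without data guards; in LMP-Declare the transition labels would be formulas evaluated through the interval abstraction:

```python
def response_step(state, activity):
    """State 0: no pending obligation (accepting); state 1: an 'a' still
    awaits a matching 'b'. The automaton is kept complete: every activity
    is processed, irrelevant ones simply loop on the current state."""
    if activity == 'a':
        return 1
    if activity == 'b':
        return 0
    return state

def accepted(trace):
    """A trace satisfies response(a, b) iff the run ends in state 0."""
    state = 0
    for activity in trace:
        state = response_step(state, activity)
    return state == 0
```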
From DPNIFs to GFAs.Next, we discuss the key aspects of constructing a GFA which accepts all and only those traces that comply with a given DPNIF D = (, ( 0 ,  0 ),   ).
The main issue is that the set S of DPN states reachable from the initial state is in general infinite, even when the net is bounded (i.e., has boundedly many markings). This is due to the existence of infinitely many valuations for the net variables. The infinity can be tamed using the partitioning strategy defined above, which induces a partition of S into equivalence classes, according to the regions assigned to the net variables. Formally, given two assignments β, β′, we say that β is equivalent to β′, written β ∼ β′, iff for every variable v there exists an equivalence region I s.t. β(v), β′(v) ∈ I. Then, two states (M, β), (M′, β′) ∈ S are said to be equivalent, written (M, β) ∼ (M′, β′), iff M = M′ and β ∼ β′. Observe that the assignments of two equivalent states satisfy exactly the same net guards. By [S]∼ we denote the quotient set of S induced by the equivalence relation ∼ over states defined above.
From Section 5.1 it directly follows that [S]∼ is finite. We can then conveniently represent each equivalence class of [S]∼ by a pair (M, α), explicitly using the region assignment α in place of the infinitely many corresponding value-based assignments. This provides the basis for the following encoding.
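Under the representation just described, checking state equivalence reduces to comparing markings and region indices. The following sketch uses helper names of our own (`region_of` is the obvious lookup into the partition induced by the sorted constants):

```python
def region_of(x, cs):
    """Index of the region of x in the partition induced by the sorted
    constants cs: even indices are open intervals, odd indices are points."""
    i = 0
    for c in cs:
        if x < c:
            return i
        if x == c:
            return i + 1
        i += 2
    return i

def equivalent(s1, s2, cs):
    """(M, beta) ~ (M', beta') iff the markings coincide and every variable
    falls into the same region under both assignments."""
    (m1, b1), (m2, b2) = s1, s2
    return m1 == m2 and all(region_of(b1[v], cs) == region_of(b2[v], cs)
                            for v in b1)
```

Equivalent states satisfy exactly the same guards, so one representative per class suffices when building the GFA.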
Definition 5 (GFA Induced by a DPNIF). For a given DPNIF D, the GFA induced by D is constructed as follows. Next, we introduce an algorithm that, given a DPNIF D, constructs the GFA corresponding to it (that is, representing all possible behaviors of D). In the algorithm, we make use of the following functions: a function that, given a pair (M, α), returns a set of transitions and region assignments. Here each β matches only the ''allowed'' regions; that is, for every transition t, we construct multiple β that account for all possible combinations of equivalence regions assigned to each variable occurring in t's read and write guards.
It is easy to see that the above functions are computable. For the first, there are always finitely many combinations of regions satisfying the guards of the net; the formulas produced by the second can be constructed using a version of the respective procedure from Definition 5 that uses β instead of α′; and the next states can be generated using the DPN firing rule, provided that it is invoked in the context of equivalence regions.

Theorem 1. Algorithm 1 effectively computes the GFA induced by a DPNIF D: it is sound and it terminates.
However, it is important to notice that Algorithm 1 is not guaranteed to produce GFAs that are complete. To ensure the completeness of the algorithm's output, the following additional modifications have to be performed.

(M1)
Given that the algorithm treats ε-transitions as normal ones, we need to compile them away from the so-obtained GFA. This is done via the standard procedure for finite-state automata (see, e.g., [33]), adapted to our setting. The procedure collapses the sequences of states connected only by silent transitions, adding the corresponding guarded transitions to their predecessors (if any). While the ε-transition removal procedure for standard automata produces deterministic automata, its counterpart working with GFAs may produce an automaton that is non-deterministic.
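The ε-removal step can be sketched with textbook ε-closures, adapted so that guarded (non-silent) transitions are re-targeted through the closures. This is a minimal sketch with a representation of our choosing (`delta[s]` lists `(guard, target)` pairs, `eps[s]` lists silent successors):

```python
def eps_closure(s, eps):
    """All states reachable from s using silent transitions only."""
    seen, stack = {s}, [s]
    while stack:
        for t in eps.get(stack.pop(), ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def remove_eps(states, delta, eps, finals):
    """Copy every guarded transition of each state in the closure back to
    the source state; the result may be nondeterministic, since distinct
    guards need not be mutually exclusive."""
    new_delta = {s: [tr for s1 in eps_closure(s, eps)
                     for tr in delta.get(s1, [])] for s in states}
    new_finals = {s for s in states if eps_closure(s, eps) & finals}
    return new_delta, new_finals
```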

(M2)
The output GFA has to be made ''tolerant'' to events whose activities are not used at all in the DPNIF. This is done by introducing extra self-loops as follows: for every state q, we insert a looping transition q →φ q, where φ is satisfied exactly by the events whose activities do not occur in the net. In this way, the GFA can skip irrelevant events that could never be processed by the net.

(M3)
The resulting GFA has to be extended with two types of extra transitions.
• The first one tackles invalid net executions where a partial run cannot be completed into a proper run due to a data-related deadlock. This can occur when the read guards of all transitions that, from the control-flow perspective, could actually fire are violated.
• The second one addresses write-related issues arising when the event to be processed carries values that violate all the write guards of the candidate transitions.
Now, whenever we get a complete GFA for a DPNIF, we can use it to check whether a log trace is compliant with the net.

Combining GFAs
Given a hybrid process with n components, we know from Section 5.2 how to compute a GFA for each of its components. Such a GFA is, in addition, minimized and determinized, which means that, being complete, it has a single trap state capturing all traces that permanently violate the component.
To perform monitoring, we follow the approach of colored automata [15,40] and label each automaton state q with one of four truth values, respectively indicating whether the corresponding process component is temporarily satisfied (TS), temporarily violated (TV), permanently satisfied (PS), or permanently violated (PV). For constraints, these values are interpreted exactly as in [15,40]. For a DPNIF, TS means that the current trace is accepted by the DPNIF but can be extended into a trace that is not, while TV means that the current trace is not accepted but is a prefix of a trace that will be (PV and PS are defined dually). At the level of the GFA, we define a labeling function λ as follows: (i) λ(q) = PS iff q is accepting and all transitions outgoing from q are self-loops; (ii) λ(q) = TS iff q is accepting and there is some transition outgoing from q that is not a self-loop; (iii) λ(q) = PV iff q is not accepting and all transitions outgoing from q are self-loops; (iv) λ(q) = TV iff q is not accepting and there is some transition outgoing from q that is not a self-loop.
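The four-valued labeling can be sketched directly from a state's acceptance status and the shape of its outgoing transitions (an illustrative encoding of our own, where `delta[q]` lists `(guard, target)` pairs):

```python
def label(q, accepting, delta):
    """RV-style verdict for a GFA state: permanent verdicts correspond to
    trap states, i.e., states all of whose outgoing transitions self-loop."""
    trap = all(t == q for _, t in delta.get(q, []))
    if q in accepting:
        return 'PS' if trap else 'TS'
    return 'PV' if trap else 'TV'
```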
The so-obtained labeled GFAs are local monitors for the single process components of .To monitor  as a whole and do early detection of violations arising from conflicting components, we need to complement such automata with a global GFA , capturing the interplay of components.We do so by defining  as a suitable product automaton, obtained as a cross-product of the local GFAs, suitably annotated to retain some relevant information.
Technically, the global GFA is obtained as a cross-product of the local GFAs: its state set is the cross-product of the local state sets, and each global guard is a conjunction of local guards that is satisfiable by exactly one abstract event. Notice that the guards of the global GFA must be satisfiable by some event; otherwise, the related labeled transitions are omitted, as there is no event to trigger them.
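Under the simplifying assumption that the local GFAs have already been determinized, the global automaton can be advanced lazily, without materializing the full cross-product: its states are tuples of local states, moved component-wise on each event (a sketch under our own naming):

```python
def product_step(global_state, local_deltas, event):
    """One step of the global GFA: every local monitor processes the event
    simultaneously (the conjunction of the local guards must hold)."""
    return tuple(d(s, event) for s, d in zip(global_state, local_deltas))

def run_product(init, local_deltas, trace):
    """Run the product automaton over a whole trace."""
    state = init
    for event in trace:
        state = product_step(state, local_deltas, event)
    return state

# Two toy deterministic local monitors:
# d1 remembers whether an 'a' was seen; d2 tracks response(a, b).
d1 = lambda s, e: 1 if e == 'a' else s
d2 = lambda s, e: 1 if e == 'a' else (0 if e == 'b' else s)
```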

Best event identification
It is crucial to notice that, differently from the local GFAs, the global GFA is not minimized. This allows the monitor to distinguish among different combinations of permanently violated components, in turn allowing for fine-grained feedback on the ''best'' events that could be processed next. To substantiate this, we pair a hybrid process with a violation cost function that, for each of its components, returns a natural number indicating the cost incurred for violating that component.
To augment  with costs, we associate each of its states  ∈  with two cost indicators: (i) a value   (), which contains the sum of the costs associated with the constraints violated in ; (ii) a value   (), which contains the best value   ( ′ ), Assuming that   is the cost associated with the violation of constraint   , we compute   and   as follows: 1. for every state where (  ) = 0 if   ∈   , and   otherwise; 2. repeat the following until  +1  () =    (), for all  ∈ : It is immediate to see that a fixpoint is eventually reached in finite time and the algorithm terminates.To this end, observe that, for all  ∈ ,   () is a non-negative integer.Moreover, at each iteration   () can only decrease.Thus, after a finite number of steps, the minimum   () for each state  ∈  is achieved, which corresponds to the termination condition.
We can also see that the algorithm is correct, i.e., that if rv(q) = c then, from q: (i) there exists a path in the global GFA to some state q′ s.t. cv(q′) = c, and (ii) there exists no path to some state q″ s.t. cv(q″) < c. These come as a consequence of step 2 of the algorithm. By this, we have that, after the j-th iteration, rv(j)(q) contains the value of the minimum-cost state achievable from q through a path containing at most j transitions. When the fixpoint is reached, even considering longer paths will not improve the value, i.e., the value is minimal.
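The fixpoint computation of rv can be sketched as a small value iteration over the global GFA (names and representation are ours):

```python
def best_reachable_cost(states, succ, cv):
    """rv(q): the minimum violation cost cv(q') over all states q' reachable
    from q (q included). Start from rv = cv and propagate until no value
    decreases; rv is a non-negative integer that can only decrease, so the
    loop terminates."""
    rv = dict(cv)
    changed = True
    while changed:
        changed = False
        for q in states:
            best = min([rv[q]] + [rv[p] for p in succ.get(q, [])])
            if best < rv[q]:
                rv[q] = best
                changed = True
    return rv
```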
Using ,   and   , we can find the next ''best events'', i.e., those events that allow for satisfying the combination of constraints that guarantees the minimum cost.Technically, let  =  1 ⋯   be the input trace and consider the set  = { 1 , … ,   } of the runs of  on .Let   be the last state of each   and let q = arg min ∈{ 1 ,…,  } {  ()}.
If the cost of q̂ already matches rv(q̂), then q̂ is the best achievable state and no event can further improve the cost. Otherwise, take a successor q′ of q̂ s.t. rv(q′) = rv(q̂). Notice that, by the definition of rv, one such q′ necessarily exists; otherwise, q̂ would have a different cost rv(q̂). The best events are then the events e* with e* ⊧ φ, for φ s.t. q̂ →φ q′. Notice that, to process the trace in the global GFA and to detect the best events, we again move back and forth between traces/events and their interval-based abstract representation, as discussed in Section 5.1. In particular, notice that there may be infinitely many different best events, obtained by different attribute assignments within the same intervals.
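Best-event identification then reduces to picking the minimum-rv end state among the runs and reading off the guards towards its rv-optimal successors. Any event satisfying one of the returned guards is a ''best event'' (a sketch under our own representation, with `succ[q]` listing `(guard, target)` pairs):

```python
def best_next_guards(run_end_states, succ, cv, rv):
    """Return the guards of transitions leading towards the cheapest
    reachable cost; an empty result means the best achievable state
    has already been reached and no event can improve the cost."""
    q = min(run_end_states, key=lambda s: rv[s])
    if cv[q] == rv[q]:
        return []
    return [g for g, t in succ.get(q, []) if rv[t] == rv[q]]
```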

Evaluation of the monitoring approach
In this section we first provide three concrete examples of the output of the monitoring approach from Section 5, which are then followed by an extensive set of scalability experiments.The source code of the corresponding proof-of-concept implementation along with all input files used in this section are publicly available at https://git.io/JM0iA.

DPN guards
The results from Section 6.2.1 show that considering the data perspective can significantly increase the computational complexity. To better understand this increase, we use the largest specifications from Table 1 ('Test no. 6', 'Only Control Flow') as the baseline. For each test, we systematically added read and write guards to both DPNs, constraining the behavior of one of the otherwise free-choice decision points in the control flow. The first set of tests focuses on the number of guards by first constraining the behavior of the earliest decision point of both DPNs (Test no. 1), then the second earliest (Test no. 2), then the third earliest (Test no. 3), and so on, until the behavior of all decision points is fully guarded (Test no. 7). The second set of tests follows the same pattern, but instead of adding extra guards, we move the same guards forward by one decision point in each test (see Table 2).
As expected, both the time required for constructing the GFA and the number of its states increase as the number of guards grows. Notice that the increasing number of states is due not only to the additional guards in the specifications, but also to their positions moving towards the end of the process models. This is confirmed by the second set of tests, where the number of GFA states (and also the required time) increases in each test even though no additional guards are added (only the position of the guards changes). This indicates that data-dependent decision points should be placed as early as possible in the control flow to improve the performance of our approach.

Synchronization activities
In our framework, process specifications can be connected both via declarative constraints and via the activities they share. Each such activity becomes a synchronization point between the specifications that contain it: the respective GFAs can progress on this activity without incurring violations only when the executions of all involved specifications reach a state in which the activity can be executed simultaneously by all of them. We performed two sets of tests to check the effect of synchronization points. There, the overlapping activities were always placed between repetitions of the original control-flow pattern, and unique names were used for each added activity. The first set of tests focuses on the number of synchronization points by adding one additional overlapping activity in each test, whereas the second set concentrates on the position of synchronization by moving a single overlapping activity towards the end of the control flow. As expected, both the time required for constructing the GFA and the number of states in the GFA decrease as the number of synchronization points increases. A single synchronization point near the beginning of the process model reduces the number of GFA states by 2835 ('Test no. 0' vs. 'Test no. 1'). This trend becomes more significant when more synchronization points are added: with 6 synchronization points (Test no. 6), the number of GFA states decreases by ∼37%. However, the placement of the synchronization points also affects the resulting GFA (specifically, the trend is the reverse of that observed for DPN guard placement): a single synchronization point near the end of the control flow reduces the number of GFA states by ∼28% ('Test no. 0' vs. 'Test no. 6'), which is better than having four synchronization points at the beginning of the control flow in the first set of tests.

Number of input specifications
For the sake of completeness, we also present updated results on the scalability w.r.t. the number of input specifications, already presented in [2]: we increase the number of procedural specifications by creating copies of the same DPN (with renamed activities/attributes), and we connect each pair of consecutive copies with a declarative process specification consisting of a single constraint. More specifically, we use the VT DPN (Section 2) and a Not Co-Existence constraint between the WT activities of two consecutive copies. 'Test no. 0' contains only the original VT DPN, while each following test adds one copy of the VT DPN and one declarative specification containing a single Not Co-Existence constraint. As in Section 6.2.1, the tests are performed both with and without considering the data perspective (see Table 4).
As in [2], the number of GFA states rapidly increases together with the number of models in input specifications.Interestingly, the inclusion of the data perspective seems to have a greater impact here than in the previous tests, both in terms of the number of GFA states and time requirements.However, as in [2], the average time required for processing the events, while slightly increasing, remains nearly instantaneous in all the tests.

Related work
We provide an account of related work by considering approaches for modeling and monitoring processes specified using procedural and declarative approaches.We point out the lack of hybrid proposals that cover the modeling complexity addressed in our work while providing monitoring capabilities.We give particular consideration to relevant contributions developed within the medical domain and dealing with clinical guidelines.
Monitoring declarative specifications. Our approach has its roots in the formal approach to monitoring through runtime verification [36]. This amounts to taking a declarative specification of a property of interest and automatically constructing a correct-by-design monitor that checks the state of the property on the basis of the trace monitored so far and of all its possible continuations. This realizes a form of reactive monitoring that is complementary to predictive monitoring [23]. Predictive monitoring implicitly learns, from historical (complete) traces, a function mapping trace prefixes to some indicator about the (most likely) future outcome(s). As surveyed in [23], the most common indicators are categorical or numerical values (such as the expected overall cost or completion time of an instance), or sequences of future events (guessing the most likely next steps of an instance). Hence, while predictive monitoring can learn a probability distribution over the (most likely) next steps given a monitored partial trace, reactive monitoring exhaustively reasons on all its (infinitely many) possible continuations.
Traditionally, runtime verification deals with declarative specifications expressed in linear-time [6] and branching-time [28] logics interpreted over infinite traces/computation trees. As said, the monitoring output is emitted by the monitor depending not only on the trace monitored so far, but also on its possible continuations and how they interact with the monitored property. Dealing with continuations of infinite length poses the challenge that not all properties are monitorable, in the sense that the monitor may not be able to determine to which output a given trace should be mapped. This is related to the intrinsic nondeterminism of ω-structures and, in turn, calls for an extensive investigation aimed at singling out the largest monitorable fragments of the considered temporal logic [1,6].
A radically different spectrum emerges when continuations have a finite (though unbounded) length, as considered in this work. In this case, specifications expressed in LTLf and linear dynamic logic over finite traces [18] correspond to properties of regular languages and are always monitorable. More specifically, an arbitrary property can be encoded into a deterministic monitor expressed as a conventional finite-state automaton, enriched with a labeling that maps every automaton state into a single, provably correct monitoring output [16,20,40]. This constitutes the technical backbone of our approach, and has in fact already been applied to monitor (propositional) Declare specifications. In particular, [40] first showed how to deal at once with the monitoring of single constraints and their interplay, while [42] continued [40] by showing how this automata-theoretic approach can be used to return fine-grained feedback on conflicts involving multiple constraints. This approach has been further developed in [16,20], which formally proved its correctness, while also showing how to monitor metaconstraints that predicate on the monitoring state of other constraints (e.g., expressing compensation properties on what is expected to hold when a given Declare constraint is permanently violated). The Declare language has also been extended to account for multiple perspectives by considering activity payloads and data-aware constraints, with corresponding monitoring approaches defined either by posing assumptions on the data domain [21] or by accepting incompleteness of the emitted verdict [39]. None of these works has considered a data-aware extension of Declare that indeed admits finitely representable monitors, which we do here by focusing on LMP-Declare. Recent works have pushed the boundaries of data-aware extensions of LTLf constraints [26,27] for which monitors can be constructed. Such approaches subsume LMP-Declare but have not considered the immersion of declarative constraints inside hybrid specifications, as done here.
Monitoring Petri nets. Many Petri net-based monitoring approaches focus on the diagnosis of discrete event systems, where the diagnosis procedure at hand associates with each observed string of events a diagnosis state (such as 'normal', 'faulty' or 'uncertain'). One of the first monitoring approaches dealing with Petri nets is studied in [55]. There, the author proposes an on-line fault detection technique based on monitoring the number of tokens in P-invariants, which also allows one to ''predict'' the future system behavior whenever a fault is detected. A similar idea is also used in [32], where the authors extend the original system models with additional places so as to capture P-invariants, allowing them to detect and isolate system failures. Notice that in both of these works the failure states are ''hardwired'' in the system models and failures are detected only when loss/duplication of tokens happens in the related places. Another seminal work [60] considers system Petri net models without failures and treats each event occurrence that does not properly match the firing conditions as a failure. There are also works related to the domain of workflow management systems. For example, [54] proposes a workflow management system that encompasses workflow monitoring and delay prediction modules based on resource-aware Petri nets.
There are many more monitoring/diagnosis approaches based on Petri nets. One of the main features they have in common is that the system is represented as a single monolithic process (although there are works, such as [29,30], that study monitoring approaches for distributed/modular specifications, the synchronizations between modules therein have to be encoded a priori), which can be insufficient in domains with highly flexible and knowledge-intensive processes.
Conformance and compliance checking for CIGs. The interplay of multiple process specifications (both procedural and declarative) has, however, been addressed to some extent from the perspective of conformance checking. A recent work [24] studies the conformance checking task for mixed-paradigm process models that integrate Petri nets and Declare constraints. However, this setting does not consider any activity payloads and, as customary in conformance checking, the authors focus on the alignment of complete traces and not on (runtime) monitoring of ongoing, incomplete process executions. There are two other research lines that consider multiple process and constraint specifications, both related to the medical domain and often studied in the context of devising adequate treatments for comorbid patients. The first line focuses on interactions between CGs and BMK from the viewpoint of the conformance checking problem [9] (i.e., how to handle cases where the non-conformance with respect to a clinical guideline is caused by the correct application of basic medical knowledge). The second studies the same interactions within the GLARE framework, but from the perspective of explainability [52,53,62] (i.e., how the actions taken during the treatment of a specific patient can be automatically explained given the presence of multiple CGs and BMK). While these approaches to a certain extent consider the interplay of procedural and declarative models, their respective tasks of interest are performed on historical data and do not consider streams of events. Interestingly, regardless of the main distinction between our on-line monitoring framework and the a posteriori analysis studied along the research lines mentioned above, there are some points in common. First of all, in order to account for all possible interactions between actions, both approaches rely on structures that compute all such interactions explicitly. The only difference lies in their type: while our approach has to account for all possible states of the system, the approach in GLARE would focus only on the concrete CIGs that are relevant to a given patient. GLARE is also able to merge parts of multiple CIGs in order to provide adequate treatments for given comorbid patients. Currently, our monitoring approach does not support this feature, but we plan to work on the automated generation of possible ''repair'' strategies that could then be selected by the user. Finally, in [10], the authors discuss how GLARE can be used to account for concurrent executions of CIGs simultaneously applied to one patient, and how to treat exceptions raised by incompatible CIGs. Such exceptions are pre-computed and stored in a special knowledge base. This is very similar to our approach, in which we create a product automaton accounting for all possible interactions between all the hybrid specification components and assign to each of its states a violation/satisfaction label.
Formalisms for representing CIGs.Many different formalisms and notations have been developed to represent CIGs.The vast majority adopts a flow-chart paradigm, with atomic activities composed through sequencing, choices, concurrency operators, and more complex control-flow patterns [14,48].Several efforts have been made towards formalizing the core elements of such proposals in a notation-agnostic way, by employing variants of (colored) Petri nets [7,31,49].This relates directly to our approach, where procedural process components are represented using data Petri nets, which form a well-behaved fragment of colored Petri nets, amenable to (runtime) verification.
The long-standing debate on how to infuse flexibility in process modeling languages and management systems [57] has pervaded research on CIGs as well.Several approaches have been devised to provide a more flexible execution of CIGs, ranging from adaptive runtime support [58] to exception handling [56].At the same time, flexibility by design has also been studied through declarative approaches, in particular adapting Declare to CIGs [45].This proposal is subsumed by the extension of Declare covered in our work.
Given this account, domain-specific hybrid languages combining notations for procedural and declarative CIG modeling could be naturally studied starting from our contribution.
Hybrid models. In addition to the aforementioned approaches, research on hybrid business process representations (HBPRs) is currently ongoing. The term hybrid refers, in this case, to combining declarative and procedural modeling paradigms into a unified modeling approach that allows expressing both strict and flexible aspects of a single process in the same model. A conceptual framework and a common terminology for these types of models have been proposed recently [3], and a number of open research challenges related to HBPRs have been identified [61]. Some process mining approaches for HBPRs [22,41,59] have also been developed. However, to the best of our knowledge, there are currently no monitoring approaches suitable for hybrid settings.

Conclusion
The ability to monitor the interplay of different process models is useful in domains where process instances tend to have high variability. An example is the medical domain, where standard treatment procedures are described as clinical guidelines and multiple guidelines often need to be executed simultaneously, giving rise to interplay and possible conflicts. Furthermore, because a clinical guideline cannot account for all possible preconditions that a patient may have, it is also necessary to employ declarative knowledge (concerning, e.g., allergies and prior conditions), which further complicates the process execution.
This paper proposes and formalizes a monitoring approach that can take into consideration the interplay of multiple process specifications (both procedural and declarative), and can anticipate possible violations that may occur when executing such specifications together. Such anticipation can help either to avoid violations or, if avoiding them is not possible, to minimize their effect on the whole execution (by considering a total violation cost computed from the violation costs assigned to the single specifications). The proposed approach is limited in that it can only provide recommendations ''locally'', by considering only the immediate next events. However, in the future we plan to extend our technique with the ability to provide non-local recommendations on continuations of the trace, which is readily supported by automata-based techniques. We also want to explore different possible execution semantics for the concurrent execution of multiple process models, in particular for what concerns local and shared activities.
Another natural continuation of this work is to adopt a more sophisticated language for expressing conditions on data attributes, going beyond the variable-to-constant comparisons adopted here. We plan to do so by exploiting recent results on data Petri nets and LTLf [25-27]. However, the more general languages adopted therein call for data abstraction techniques that are more sophisticated than the interval-based propositionalization approach used here. Hence, extending along this direction calls for further investigation, especially from the algorithmic point of view.
To test our approach in practice, we performed various scalability experiments with a proof-of-concept implementation of the proposed approach. On the one hand, they revealed good performance in terms of monitoring incoming events, with average event processing times remaining near-instantaneous in all of the tests. On the other hand, the experiments demonstrated that the main limiting factor is, as expected, the construction of the complete hybrid specification used for monitoring, which requires a considerable amount of time and memory and, consequently, limits the size and number of input procedural and declarative specifications. However, we note that these drawbacks can be mitigated by incorporating the optimization techniques widely investigated for automata construction from LTLf specifications, such as those in [5,17,37,63]; these can all be integrated directly into our approach. Another possibility is to shift some of the computational complexity towards event processing, which is per se very fast and only mildly affected by both the size and number of the input specifications. Furthermore, even without these improvements, the proposed approach is already able to handle processes of realistic, though relatively small, sizes (∼60 activities across process specifications).

Fig. 2 .
Fig.2.DPN representations for the peptic ulcer (left) and venous thromboembolism (right) clinical guideline fragments.We use prefixes r: and w: to distinguish read and write guards respectively.Trivial, true guards are omitted for brevity.
s.t. there are runs q →ε ⋯ →ε q′ (where every transition is silent) and q′ →φ q″ with φ ≠ ε. The removal replaces each such state q with the corresponding set of states and adds transitions to the predecessors of q (if any) as well as to q″.

The labeling function is total, assigning to every transition t a label from the activity alphabet extended with ε, where ε denotes a silent transition; V is the set of the net's variables; r and w are the read and write guard-assignment functions, mapping every transition t into a read and a write guard, respectively. We call the variables occurring in r(t) and w(t) the sets of t's read and write variables. Given a node x of the net, the preset •x and the postset x• are, respectively, the set of nodes with an arc into x and the set of nodes reached by an arc from x. Fig. 2 shows two DPNs encoding the clinical guideline fragments discussed in Section 2. The two figures employ string constants, which can be easily encoded into dedicated real numbers to fit our formal definition.

We turn to the DPN execution semantics. A state of a DPN is a pair (M, β), where: (1) M is a total marking function, assigning a number M(p) of tokens to every place p; (2) β : V → R is a total variable valuation, assigning a real value to every variable in V. A DPN can progress from one state to another if one of its enabled transitions fires. A transition is enabled if all its input places contain sufficiently many tokens to consume and both its read and write guards are satisfied. More formally, a transition t is enabled in state (M, β) under a partial valuation γ, denoted (M, β)[t, γ⟩, iff: (1) γ is defined on all read and write variables of t; (2) γ(v) = β(v), for every read variable v of t; (3) γ ⊧ r(t) and γ ⊧ w(t); (4) every place in •t carries enough tokens. Then, if t is enabled in (M, β) under γ, it can fire and produce a new state (M′, β′).

To deal with the first issue, for every state (M, α) and every transition t such that α ̸⊧ r(t) while every place in •t carries enough tokens, we add a looping transition (M, α) → (M, α), guarded as in Definition 5 (see 3), with the guard computed for every combination of equivalence regions, each composed into a (partial) variable region valuation violating the corresponding atomic conditions of r(t). This is only done when the guard is satisfiable. The step can also be optimized by collecting all such guards into one DNF formula, which in turn reduces the number of transitions in the GFA. The second one addresses write-related issues arising when the event to be processed carries values that violate all the write guards of candidate transitions. We handle this as follows: for every (M, α) and every t such that w(t) ≢ ⊤, we add a looping transition (M, α) → (M, α) guarded by the corresponding conjunction, again only if it is satisfiable.

Table 1
Scalability wrt the size of input specifications.

Table 2
Scalability wrt the number of guards and position of a guard in DPN control flow.
construction time) start to quickly ramp up once the input specifications become larger. The average processing time of each incoming event remains below 2 ms in all tests.

Table 3
Scalability wrt the number of synchronizations and position of a synchronization in DPN control flow.

Table 4
Scalability wrt the number of input specifications. Results for both sets are reported in Table 3 and make use of the same model from Table 1 ('Test no. 6', 'Only Control Flow') as baseline.