Formal foundations for responsible application integration

Enterprise Application Integration (EAI) constitutes the cornerstone of enterprise IT landscapes that are characterized by heterogeneity and distribution. Starting from established Enterprise Integration Patterns (EIPs) such as Content-based Router and Aggregator, EIP compositions are built to describe, implement, and execute integration scenarios. The EIPs and their compositions must be correct at design time and runtime in order to avoid functional errors or incomplete functionality. However, current EAI system vendors use many of the EIPs as part of their proprietary integration scenario modeling languages that are not grounded in any formalism. This renders correctness guarantees for EIPs and their composition impossible. Thus, this work advocates responsible EAI based on the formalization, implementation, and correctness of EIPs. For this, requirements on an EIP formalization are collected, and based on these requirements an extension of db-nets, i.e., timed db-nets, is proposed, fully equipped with an execution semantics. It is shown how EIPs can be realized based on timed db-nets and how the correctness of these realizations can be established. Moreover, the simulation of EIP realizations based on timed db-nets is enabled, which is essential for later implementation. The concepts are evaluated in many ways, including a proof-of-concept implementation and case studies. The EIP formalization based on timed db-nets constitutes the first step towards responsible EAI.

© 2019 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).


Introduction
With the growing number of cloud and mobile applications, the importance of Enterprise Application Integration (EAI) [1] has immensely increased. Integration scenarios, essentially compositions of Enterprise Integration Patterns (EIPs) [2] and their recent extensions [3][4][5][6], describe typical concepts in designing messaging systems as used for EAI (e.g., the communication between these applications). Due to the increasing heterogeneity of endpoints and their distribution, trust in productive integration solutions becomes even more essential. This, in turn, requires providing means for a responsible development of integration solutions (cf. "responsible programming" [7]) in order to avoid design flaws such as functional errors or incomplete functionality, starting with the EIPs.
Similar to Cerf [7], we say that users who define integration solutions should have a clear sense of responsibility for their reliable operation as well as their resistance to compromise and error. This implies that the integration logic can only be trusted if, in principle, it is possible to prove that it behaves correctly. In turn, this means that users should be able to express and specify what their integration solution should do. We call a method that takes this into account a responsible development of integration solutions. The main properties of this principle are a formal treatment of integration patterns (i.e., their formalization within a framework with clearly defined syntax and semantics that supports formal analysis of its models), simulation, and validation. We conjecture that this will become one of the key principles for development in shared-responsibility environments like cloud or mobile computing. Moreover, it will constitute the foundation for functionally correct compositions and correctness-preserving improvements (e.g., optimizations).
Contemporary EAI system vendors use many of the EIPs as part of their proprietary integration scenario modeling languages [3]. However, these languages are not grounded in any formalism and, hence, may produce models that are subject to the design flaws mentioned above. Due to the missing formal definition, the detection and analysis of these flaws are currently by and large performed manually. This requires considerable effort from system engineers and may potentially lead to mistakes. Hence, nowadays, EIPs should be regarded as a set of informal design solutions rather than a collection of models produced by a formal language whose correctness can be verified, thus leaving the EAI vendors with their own proprietary semantics and not allowing for a responsible development. Our recent survey [3] identified a first attempt towards formalization of some EIPs using colored Petri nets (CPNs) [8]. Although the CPN colors abstractly account for data types, and CPNs support the control flow through control threads (i.e., tokens) progressing through the net, carrying data conforming to colors, they cannot be used to model, query, update, and reason on requirements inherent to the extended EIPs [2][3][4][5][6] such as persistent data or timings.
To overcome these limitations, a responsible development of integration solutions, i.e., solutions that can be thoroughly tested for their correctness at design time, requires the formalization of its pattern foundations. Therefore we adopt a responsible pattern formalization process that covers the following objectives: (i) defining the execution semantics of EIPs and (ii) their realization, (iii) simulation of the EIP realizations as well as (iv) their validation and verification. Fig. 1 shows this process with its three main steps that we subsequently discuss: pattern formalization, pattern implementation, pattern correctness.
Definition
The definition of a pattern starts with capturing and specifying its semantics. With a thorough understanding of the pattern and its variations, it can be formally represented. The resulting formal pattern model can be analyzed and verified (i.e., model checking). With model checking capabilities, errors in patterns can be found and either their semantics or formal representation is revisited.
Implementation
If model checking is not possible or difficult, the formal patterns can be implemented, configured, and simulated in a suitable tool. The simulation not only bridges the implementation gap, but also allows for an experimental validation of a pattern.

Correctness
The correctness of a pattern can be decided according to its semantics, when put into the context of a dedicated, scenario-specific configuration and a test design, which specifies the desired properties, such as the expected output of a pattern simulation for a given input. This test design is instantiated and checked during the simulation of the pattern. Any flaws found during this step can result in another round of formal or implementation adjustments.
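The test-design loop just described can be pictured as a small harness: a pattern simulation is run against a scenario-specific configuration and compared with a test design that fixes the expected output for a given input. This is only an illustrative sketch; the names `TestDesign`, `check_pattern`, and the filter example are our own, not part of any EAI tool.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class TestDesign:
    """A scenario-specific test design: input messages plus expected output."""
    input_msgs: List[str]
    expected_out: List[str]

def check_pattern(simulate: Callable[[List[str]], List[str]],
                  design: TestDesign) -> bool:
    """Simulate the pattern on the design's input and compare with the
    expected output; a mismatch signals a flaw requiring another round of
    formal or implementation adjustments."""
    return simulate(design.input_msgs) == design.expected_out

# Illustrative pattern under test: a message filter that drops empty messages.
def filter_sim(msgs: List[str]) -> List[str]:
    return [m for m in msgs if m]

design = TestDesign(input_msgs=["a", "", "b"], expected_out=["a", "b"])
print(check_pattern(filter_sim, design))  # True
```

A failing check would send the developer back to the formalization or implementation step, exactly as in Fig. 1.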
We argue that existing approaches do not fully support a responsible development, and hence the following research questions are formulated to guide the design and development of an EIP formalization process living up to objectives (i)-(iv):

Q1 Which EAI requirements are relevant for the formal definition of EIPs? To which extent are they met by existing approaches?
Q2 How to design an EIP formalization to meet relevant EAI requirements and objectives (i)-(iv)?
Q3 How to realize the EIPs and real-world integration scenarios?
Q4 How to accomplish correctness testing and simulation of EIP realizations?
The conference paper [9] that is extended in this work has provided the foundations for Q1-Q2 (and partially Q3). The formalization of EIPs is based on db-nets [10] as a database-centric extension of CPNs (with atomic transactions). In [9], db-nets have been extended with EAI requirements such as time, resulting in so-called timed db-nets. It has been shown that with timed db-nets a responsible development of integration solutions becomes possible. The following example leverages timed db-nets, illustrating selected EIPs and their requirements on a formalization.

Example 1. A commonly used stateful Aggregator pattern [2] is designed to combine a number of input messages into a single output message. In the stateful version, the aggregator also stores collected messages in a persistent storage. The messages can be collected according to a Message Sequence [2], which identifies a group of coherent messages. Fig. 2 shows how the stateful aggregator pattern can be represented using timed db-nets. While the formal aspects will be specified throughout this work (e.g., data, queries, actions, transactions, time), the aggregator's semantics is subsequently described in timed db-net terms.
The aggregator starts with a number of input messages in its input channel that is represented with a place ch_in. Every message consists of a unique message identifier msg and an information block data, and will eventually be put into the database (cf. DB schema in Fig. 2 for a more detailed database schema description). To collect messages from the input channel and assign them to correct sequences, the net correlates every incoming (msg, data) token to those in place ch_p, which, in turn, stores pairs of sequences seq and lists of messages msgs that have already been collected and stored in the persistent storage. If the message is the first in a sequence, new entries, one containing information about the message and another containing data about the referenced sequence, are added to tables called Messages and Sequences, respectively. This is achieved by checking the guard condition isFirst(msg, msgs) = true attached to transition T1, and, if the condition holds, by firing that transition and executing the action CreateSeq attached to it. The firing of T1 also starts a timer (cf. ch_timer) that in 30 time units can expire the sequence status (this is done by firing transition T3, which changes the sequence status in table Sequences to expired). If a message is not new in a sequence, then it is inserted into Messages by firing T2 and executing UpdateSeq (see a more detailed definition of updates in the Actions block in Fig. 2). However, the update by UpdateSeq may fail if a message is already in the database or a referenced sequence has already been aggregated due to a timeout (i.e., status is expired). In this case the net switches to an alternative roll-back flow (a directed arc from T2 to ch_in) and puts the message back into the input message channel ch_in.
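For intuition, the stateful aggregator's behaviour can be approximated in plain code. This is a deliberately simplified sketch: the 30-unit timeout and the action names in the comments mirror Fig. 2, but the class, its fields, and the completion condition (a fixed message count) are our own assumptions, and no Petri-net machinery is modelled.

```python
from collections import defaultdict

TIMEOUT = 30  # time units, as in transition T3

class StatefulAggregator:
    def __init__(self, complete_after: int):
        self.complete_after = complete_after  # illustrative completion condition
        self.sequences = {}                   # seq -> {"status", "created"}  (table Sequences)
        self.messages = defaultdict(list)     # seq -> collected messages      (table Messages)
        self.pending = []                     # stand-in for the roll-back arc back to ch_in

    def receive(self, seq, msg, data, now):
        entry = self.sequences.get(seq)
        if entry is None:                     # first message: T1 fires, CreateSeq
            self.sequences[seq] = {"status": "open", "created": now}
        elif entry["status"] != "open":       # UpdateSeq would fail: roll back
            self.pending.append((seq, msg, data))
            return None
        self.messages[seq].append((msg, data))  # T2 fires, UpdateSeq
        if len(self.messages[seq]) >= self.complete_after:
            self.sequences[seq]["status"] = "complete"
        return self.try_aggregate(seq, now)

    def try_aggregate(self, seq, now):
        entry = self.sequences[seq]
        if now - entry["created"] >= TIMEOUT:   # T3 fires, TimeoutSeq
            entry["status"] = "expired"
        if entry["status"] in ("complete", "expired"):  # Aggregate fires
            msgs = self.messages.pop(seq)
            del self.sequences[seq]
            return [d for _, d in msgs]         # single combined output message
        return None
```

For example, with `complete_after=2`, the first message of a sequence yields no output, while the second triggers the aggregation and returns the combined payloads.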
The aggregation of a sequence of messages happens based either on the completion condition (i.e., the sequence status is complete) or on a time-out of 30 time units (i.e., the sequence status is expired), and is realized by firing transition Aggregate. Since the sequence completion logic is defined depending on a specific pattern application scenario, we use a subnet (a cloud element in the Net block of Fig. 2) denoting a configurable part of the model. Note that the logic must always be realized in the designated subnet, since we explicitly guard it with transition T4 that executes an update (using action Complete) changing a given sequence state. ■

In this work we first devise a collection of EAI requirements on the EIP formalization process, including a comprehensive assessment and elaboration on the selection of existing approaches, i.e., Petri nets (↦→ Q1). Based on this, the formalism of db-nets is selected and equipped with missing EAI requirements, most prominently time, resulting in timed db-nets (↦→ Q2). The extension also includes a study of suitable time formalisms (including the one we have chosen). The verification of reachability for timed db-nets is shown to be decidable based on an elaborated proof sketch. Next, an instructive catalog of pattern realizations is provided (↦→ Q3). It is shown how to test the correctness of various formal EIP realizations based on their execution traces (↦→ Q4). Finally, a prototypical implementation on top of CPN Tools [11] is used to develop a concept to experimentally test the correctness of EIPs realized in timed db-nets.
The contributions to research questions Qx are elaborated following the principles of the design science research methodology described in [12]: "Activity 1: Problem identification and motivation" is based on literature and an assessment of vendor-driven solutions (e.g., [3]) as well as on the harvested EAI requirements (i.e., existing catalogs with 166 integration patterns) in a quantitative analysis (cf. Q1). "Activity 2: Define the objectives for a solution" is addressed by formulating objectives (i)-(iv). For "Activity 3: Design and development" several artifacts are created to answer questions Q1-Q4 and meet objectives (i)-(iv), including the formalism for EIPs, the catalog of realizations of EIP formalizations, and the foundations for correctness testing. "Activity 4: Demonstration" and "Activity 5: Evaluation" are conducted based on a prototypical implementation and case studies.
The paper is structured as follows. In Section 2, pattern requirements are harvested from literature. The formalism for EIPs, timed db-nets, is presented in Section 3. An instructive catalog of formalized patterns is provided in Section 4. The foundations for correctness testing are laid in Section 5. In Section 6, we elaborate on the comprehensiveness of the formalism in a quantitative study, show a prototypical db-net realization for testing correctness, and discuss the general applicability of PNs and, in particular, timed db-nets for the composition of integration patterns in a real-world example. We conclude by discussing related work in Section 7 and outlining the main results and future research directions in Section 8.

Formalization requirements analysis
In this section, we collect the EAI requirements relevant for the formalization of the EIPs by analyzing the existing pattern catalogs [2][3][4][5][6] (cf. Q1). Then we briefly discuss which of them can be represented by the means of CPNs or db-nets, and which require further extensions.

Pattern analysis and categories
The EIP formalization requirements are derived by an analysis of the pattern descriptions based on the integration pattern catalogs from 2004 [2] (as original) and recent extensions [3][4][5][6] (as extended) that consider emerging EAI scenarios (e.g., cloud, mobile and internet of things). Together, the catalogs describe 166 integration patterns, of which we consider 139 due to their relevance for this work (e.g., excluding abstract concepts like Canonical Data Model [2] or Messaging System [2]). During the analysis, we manually collected characteristics from the textual pattern descriptions (e.g., data, time) and created new categories, if not existent.
The reoccurring characteristics found in this work allow for a categorization of patterns, as summarized in Fig. 3, to systematically pinpoint relevant EAI requirements into general categories (with more than one pattern). Most of the patterns require (combinations of) Data flow, Control (Crtl.) flow, and (Transacted) Resource ((Tx.) Res.) access. While the control flow denotes the routing of a message from pattern to pattern via channels (i.e., ordered execution), the data flow describes the access to the actual message by patterns (including message content, headers, attachments). Notably, most of the patterns can be classified as control-only (Crtl.-only; e.g., Wire Tap [2]) and data-only (Data-only; e.g., Splitter [2]) or as their combination (Data-Crtl.; e.g., Message Filter [2]), which stresses the importance of data aspects of the routing and transformation patterns. In addition, resources denote data from an external service that are not in the message (e.g., Data Store [3]). The EIP extensions add new categories like combinations of data and {time, resources} (Data-Time, like Message Expiration [2,3], and Data-Res., like Encryptor [3]) and control and time (Crtl.-Time; e.g., Throttler [3]). For instance, the motivating example in Fig. 2 is classified as Data-Tx.-Res.-Time. The different categories are disjoint with respect to patterns.

From categories to requirements
We assume that the control requirement REQ-0 "Control flow" is inherently covered by any PN approach, and thus by CPNs and db-nets. However, there are two particularities in the routing patterns that we capture in requirement REQ-1 "Msg. channel priority, order": (a) the ordered evaluation of message channel conditions or guards of sibling PN transitions, required for the Content-based Router pattern, and (b) the enablement or firing of a PN transition according to a ratio for the realization of a Load Balancer [3]. In both cases, neither execution priorities nor ratios are trivially covered by CPNs or db-nets.
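For intuition, the two particularities of REQ-1, ordered condition evaluation and ratio-based channel selection, look as follows in plain code. This is illustrative only (the function names, the `dead_letter` fallback, and the weight values are our own); the formal realizations on top of timed db-nets appear in Section 4.

```python
import random

def content_based_router(msg, routes):
    """REQ-1(a): evaluate channel conditions in a fixed order; the first
    matching condition wins, mirroring ordered guard evaluation of
    sibling transitions."""
    for condition, channel in routes:
        if condition(msg):
            return channel
    return "dead_letter"  # illustrative fallback channel

def load_balancer(channels, weights, rng=random.Random(0)):
    """REQ-1(b): pick an output channel according to a ratio."""
    return rng.choices(channels, weights=weights, k=1)[0]

routes = [(lambda m: m["type"] == "order", "orders"),
          (lambda m: True, "default")]
print(content_based_router({"type": "order"}, routes))  # orders
print(content_based_router({"type": "other"}, routes))  # default
```

In a vanilla CPN, both behaviours would require encoding the evaluation order or the ratio explicitly into the net structure, which is exactly the modeling overhead REQ-1 points at.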
Furthermore, there are 77 patterns in the catalogs with data and 10 with message format aspects, which require an expressive CPN token representation (e.g., for encodings, security, complex message protocols), for which we add a second requirement REQ-2 "Data, format" that has to allow for the formal analysis of the data. Although CPNs and db-nets have to be severely restricted (e.g., finite color domains, pre-defined number of elements) for that, db-nets promise a relational representation that can be formally analyzed [10].
The 49 patterns with resources (REQ-4 "(external) Resources") require: (a) create, retrieve, update and delete (CRUD) access to external services or resources, and (b) transactional semantics on a pattern level. Similarly, exception semantics are present in 28 patterns (REQ-5 "Exceptions"), which require compensations and other post-error actions. Consequently, a PN definition that allows for reasoning over time aspects and structured (persistent) data access is required.

Requirements summary
Table 1 summarizes the formalization requirements for timed db-nets by setting the coverage of the CPN [8] and db-net [10] approaches into context. While CPNs provide a solid foundation for control (cf. REQ-0) and a simple data flow representation (cf. REQ-2), db-nets extend them towards more complex data structures (message protocols in our case; cf. REQ-2), and add CRUD operations (cf. REQ-4(a)), transactional semantics (cf. REQ-4(b)), and exception handling (cf. REQ-5), suitable for working with external, transactional resources. In CPNs, message channel distributions cannot be represented, and priorities require explicit modeling, leading to complex models. In this work we build upon the CPN approach by subsequently defining timed db-nets in Section 3 for the time-related requirements (cf. REQ-3(a)-(d)) and provide (less complex) realizations for message channel priority execution (cf. REQ-1(a)) and load balancing (cf. REQ-1(b)) in Section 4.

Integration pattern formalization
We recall the main characteristics of db-nets [10], and then extend them with temporal features, obtaining a formalism called timed db-nets. We then show how decidability results on the formal analysis of db-nets can be lifted to timed db-nets (cf. Q2).

The db-net Framework
When facing the problem of formalizing multi-perspective models that suitably account for the dynamics of a system (i.e., the process perspective) and how it interacts with data (i.e., the data perspective), several design choices can be made. In the Petri net tradition, the vast majority of formal models striving for this integration approaches the problem by enriching execution threads (i.e., tokens) with complex data. Notable examples within this tradition are data nets [13] and ν-nets [14], Petri nets with nested terms [15], nested relations [16], and XML documents [17].
While all of these approaches treat data as subsidiary to the control-flow dimension, the EIPs require data elements attached to tokens to be connected to each other by explicitly represented global data models (cf. Section 2). Consequently, these approaches do not allow for reasoning on persistent, relational data such as tree- or graph-structured message formats [18].

Db-nets. The recently proposed framework of db-nets [10] aims at conceptually establishing this connection through a formal model that consists of three layers (cf. Fig. 4). On the one hand, a db-net separately represents a persistence storage (constituted by a full-fledged relational database with constraints) and a control flow (captured as a CPN with additional, specific constructs). On the other hand, it explicitly handles their interplay through an intermediate data logic layer, which provides the control layer with queries and database operations (such as trigger, update, read, bind). Updates are transactional, that is, they are only committed if the resulting instance of the persistence layer satisfies the database constraints. The control layer is informed about the outcome of an update, and can consequently compensate in case of a roll-back.
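The three layers can be pictured as a simple object structure. This is an informal sketch with made-up field names; the actual formalism, with FO(D) constraints and typed queries, is given in Definition 2.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class PersistenceLayer:
    """Relational storage with constraints (in db-nets: a schema plus
    first-order constraints such as keys and foreign keys)."""
    tables: Dict[str, List[Tuple]] = field(default_factory=dict)
    constraints: List[Callable[["PersistenceLayer"], bool]] = field(default_factory=list)

    def satisfies_constraints(self) -> bool:
        return all(c(self) for c in self.constraints)

@dataclass
class DataLogicLayer:
    """Queries and parameterized update actions exposed to the control layer."""
    queries: Dict[str, Callable[[PersistenceLayer], List[Tuple]]]
    actions: Dict[str, Callable[[PersistenceLayer], None]]

@dataclass
class ControlLayer:
    """A CPN-like net whose view places read query answers and whose
    transitions may trigger actions; an update commits only if all
    constraints still hold afterwards, otherwise it rolls back."""
    view_places: Dict[str, str]   # place name -> query name
    transitions: Dict[str, str]   # transition name -> action name
```

The point of the separation is that the control layer never touches tables directly: it only sees query answers (via view places) and action names (via transitions), exactly the mediation role the data logic layer plays in Fig. 4.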
We select db-nets (see Definition 2) as the foundation of timed db-nets for three main reasons: (i) the ability to represent relational data (cf. REQ-2: "data", "format"); (ii) built-in support for transactional CRUD operations (cf. REQ-4); (iii) exception handling and a corresponding compensation mechanism (cf. REQ-5). In addition, since db-nets are based on CPNs, it is possible to lift existing simulation techniques from CPNs to db-nets [10].
In the remainder of this section, we recall the definition of a db-net and its execution semantics.

Definition 2 ([10]). A db-net is a tuple ⟨D, P, L, N⟩, where:
• D is a type domain, i.e., a finite set of data types, each of the form D = ⟨∆_D, Γ_D⟩, where ∆_D is the value domain of D, and Γ_D is a set of domain-specific (rigid) predicates.
• P is a D-typed persistence layer, i.e., a pair ⟨R, E⟩, where R is a D-typed database schema, and E is a finite set of first-order FO(D) constraints over R (capturing typical database constraints such as key and foreign key dependencies).
• L is a D-typed data logic layer over P, i.e., a pair ⟨Q, A⟩, where Q is a finite set of FO(D) queries over P, and A is a finite set of actions over P. Each action in A is parameterized, and uses its parameters to express a series of insertions and deletions over P.
• N is a D-typed control layer over L, i.e., a tuple ⟨P, T, F_in, F_out, F_rb, color, query, guard, act⟩, where:
– P = P_c ∪ P_v is a finite set of places partitioned into control places P_c and so-called view places P_v,
– T is a finite set of transitions,
– F_in is an input flow from P to T,
– F_out and F_rb are respectively an output and a rollback flow from T to P_c,
– color is a color assignment over P (mapping P to a cartesian product of data types),
– query is a query assignment from P_v to Q (mapping the results of Q as tokens of P_v),
– guard is a transition guard assignment over T (mapping each transition to a formula over its input inscriptions), and
– act is an action assignment from T to A (mapping some transitions to actions triggering updates over the persistence layer). ■

Input and output/roll-back flows contain inscriptions that match the components of colored tokens present in the input and output/roll-back places of a transition. Such inscriptions consist of tuples of (typed) variables, which can then be mentioned in the transition guard as well as in the action assignment (to bind the updates induced by the action to the values chosen to match the inscriptions); in the case of the output flow, the inscriptions may also contain rigid predicates. Specifically, given a transition t, we denote by InVars(t) the set of variables mentioned in its input flows, by OutVars(t) the set of variables mentioned in its output flows, and by Vars(t) = InVars(t) ∪ OutVars(t) the set of variables occurring in the action assignment of t (if any). Fresh variables FreshVars(t) = OutVars(t) \ InVars(t) denote those output variables that do not match any corresponding input variables, and are consequently interpreted as external inputs. While input inscriptions are used to match tokens from the input places to InVars(t), the output expressions that involve rigid predicates operate over OutVars(t). In case of numerical types, these expressions can be used to compare values, or to arithmetically operate over them. We call a db-net plain if it employs matching output inscriptions only (i.e., does not use expressions).
Intuitively, each view place is used to expose a portion of the persistence layer in the control layer, so that each token represents one of the answers produced by the query attached to the place. Such tokens are not directly consumed, but only read by transitions, so as to match the input inscriptions with query answers. A transition in the control layer may bind its input inscriptions to the parameters of the data logic action attached to the transition itself, thus providing a mechanism to trigger a database update upon transition firing (and consequently also indirectly change the content of view places). If the induced update commits correctly, the transition emits tokens through its output arcs, whereas if the update rolls back, the transition emits tokens through its rollback arcs.
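The commit-or-roll-back behaviour of a transition firing can be sketched as follows. This is our own simplification, not the db-net semantics itself: the database is a dict of tables, an action mutates a copy, and the token is routed to the output or rollback place depending on whether the constraints still hold.

```python
import copy
from typing import Callable, Dict, List, Tuple

Database = Dict[str, List[Tuple]]

def fire_transition(db: Database,
                    action: Callable[[Database], None],
                    constraints: List[Callable[[Database], bool]],
                    token, out_place: List, rb_place: List) -> Database:
    """Apply the action in an atomic transaction: commit and emit the token
    on the output arc if all constraints hold on the result, otherwise
    discard the update and emit the token on the rollback arc."""
    candidate = copy.deepcopy(db)
    action(candidate)
    if all(c(candidate) for c in constraints):
        out_place.append(token)   # commit path: token routed via F_out
        return candidate
    rb_place.append(token)        # roll-back path: token routed via F_rb
    return db                     # database instance is left unchanged

# Illustrative run: inserting a duplicate key violates a uniqueness constraint.
unique = lambda d: len(d["Messages"]) == len({m for (m,) in d["Messages"]})
db = {"Messages": [("m1",)]}
out, rb = [], []
db = fire_transition(db, lambda d: d["Messages"].append(("m2",)), [unique], "t2", out, rb)
db = fire_transition(db, lambda d: d["Messages"].append(("m2",)), [unique], "t2", out, rb)
print(out, rb)  # ['t2'] ['t2']
```

The second firing mirrors the aggregator's failing UpdateSeq: the duplicate insert is rolled back and the token travels along the rollback arc, just as the message is put back into ch_in in Fig. 2.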
The terms message and (db-net, CPN) token will be used synonymously hereinafter.

Db-net execution semantics. We briefly recall the execution semantics of db-nets. A state of a db-net captures at once a state of the persistence layer (i.e., an instance of the database) and that of the control layer (i.e., a net marking, where the content of view places must be compatible with that of the database instance). More technically, in each moment (called snapshot) the persistence layer is associated with a database instance I, and the control layer is associated with a marking m aligned with I via query (for what concerns the content of view places). The corresponding snapshot is then simply the pair ⟨I, m⟩. Tokens in m have to carry data compatible with the color of the places, and the marking of a view place in P_v must correspond to the associated queries over the underlying database instance.
Similar to CPNs, the firing of a transition t in a snapshot is defined by a binding that maps the variables of t to values in the value domains of the different layers, provided that several properties are guaranteed, e.g., that the guard attached to t is satisfied. More specifically, we have the following.

Definition 3 (Transition Enablement [10]). Let B be a db-net ⟨D, P, L, N⟩, and t a transition in N. Let σ be a binding for t, i.e., a substitution σ : Vars(t) → ∆_D. A transition t ∈ T is enabled in a B-snapshot ⟨I, m⟩ with binding σ, if:
• for every place p ∈ P, m(p) provides enough tokens matching those required by inscription w = F_in(⟨p, t⟩), once w is grounded by σ, i.e., σ(w) ⊆ m(p);
• the guard guard(t)σ evaluates to true;
• σ is injective over FreshVars(t), thus guaranteeing that fresh variables are assigned to pairwise distinct values, and for every fresh variable v ∈ FreshVars(t), σ(v) ∉ Adom_type(v)(I) ∪ Adom_type(v)(m). ■

Firing an enabled transition has the following effects: (i) all matching tokens in control places P_c are consumed; (ii) the action instance induced by the firing is applied on the current database instance in an atomic transaction (and rolled back, if not successful); (iii) accordingly, tokens on output places (connected via F_out) or rollback places (i.e., those connected via the rollback flow F_rb) are produced. Technically, we have the following.

Definition 4 (Transition Firing [10]). Let B be a db-net ⟨D, P, L, N⟩, and let s1 = ⟨I1, m1⟩ and s2 = ⟨I2, m2⟩ be two B-snapshots. Fix a transition t of N and a binding σ such that t is enabled in s1 with σ (cf. Definition 3). Let I3 = apply(act(t)σ, I1) be the database instance resulting from the application of the action attached to t on database instance I1 with binding σ for the action parameters. For a control place p, let w_in(p, t) = F_in(⟨p, t⟩), and w_out(p, t) = F_out(⟨p, t⟩) if I3 is compliant with P, or w_out(p, t) = F_rb(⟨p, t⟩) otherwise. We say that t fires in s1 with binding σ producing s2, written s1[t, σ⟩s2, if:
• I2 = I3 if I3 is compliant with P, and I2 = I1 otherwise;
• for each control place p, m2 corresponds to m1 with the following changes: the tokens σ(w_in(p, t)) are removed from p, and the tokens σ(w_out(p, t)) are added to p. In formulas: m2(p) = (m1(p) − σ(w_in(p, t))) + σ(w_out(p, t)), where − and + denote multiset difference and union. ■

All in all, the complete execution semantics of a db-net is captured by a possibly infinite-state transition system, where each transition represents the firing of an enabled transition in the control layer of the net with a given binding, and each state is a snapshot. The infinity comes from the presence
of external inputs, and the fact that value domains may contain infinitely many elements.It is important to notice that the resulting transition system may be infinite even if the control layer is bounded in the classical Petri net sense.

Example 5.
The aggregator in Fig. 2 requires a view place ch_p for storing and updating the message sequences, as well as a rollback arc (T2, ch_in) to manage compensation tasks. The graphical notation is in line with [10]. ■

Timed db-nets
We now extend the db-net model so as to account for an explicit notion of time. While the implicit temporal support in PNs (i.e., adding places representing the current time) is rather counterintuitive [19], the temporal semantics of adding timestamps to tokens [19], timed places [20], arcs [21] and transitions [22] are well studied and naturally capture different facets of time in dynamic systems. The temporal requirements in REQ-3 demand a quantified, fixed or discrete time representation by timed transitions or places, representing the delay induced by a transition firing. This is currently missing in db-nets. So, in the spectrum of timed extensions to PNs, we extend the db-net control layer N with a temporal semantics that achieves a suitable trade-off: it is expressive enough to capture the requirements in REQ-3, and at the same time it allows us to transfer the existing technical results on the verification of db-nets to the timed extension.
We start by explaining the intuition behind the approach, and then provide the corresponding formalization. We assume that there is a global, continuous notion of time. The firing of a transition is instantaneous, but can only occur in certain moments of time, while it is inhibited in others, even in the presence of the required input tokens. Every control token, that is, a token assigned to a control place, carries a (local) age, indicating how much time the token has been spending in that control place. This means that when a token enters a place, it is assigned an age of 0. The age then increments as time flows and the token stays put in the same place. View places continuously access the underlying persistence layer, and consequently their (virtual) tokens do not age. Each transition is assigned a pair of non-negative (possibly identical) rational numbers, respectively describing the minimum and maximum age that input tokens should have when they are selected for firing the transition. Thus, such numbers identify a relative time window that expresses a delay and a deadline on the possibility of firing.

Definition 6. A timed db-net is a tuple ⟨D, P, L, N, τ⟩, where ⟨D, P, L, N⟩ is a db-net with control layer N, and τ : T → Q≥0 × (Q≥0 ∪ {∞}) is a timed transition guard mapping each transition t ∈ T to a pair of values τ(t) = ⟨v1, v2⟩, such that: (i) v1 is a non-negative rational number; (ii) v2 is either a non-negative rational number equal to or greater than v1, or the special constant ∞.

■
The default choice for τ is to map transitions to the pair ⟨0, ∞⟩, which corresponds to a standard db-net transition.
Given a transition t, we adopt the following graphical conventions: (i) if τ(t) = ⟨0, ∞⟩, then no temporal label is shown for t; (ii) if τ(t) is of the form ⟨v, v⟩, we attach the label "@⟨v⟩" to t.

Example 7. The aggregator in Fig. 2 defines a timed transition T3 that can be fired precisely 30 time units (here, seconds) after the moment when a new sequence seq has been created. Upon firing, T3 enables the Aggregate transition by updating the sequence's status on the database to expired using the TimeoutSeq action. ■

Timed db-net execution semantics. The execution semantics of timed db-nets builds on the one for standard db-nets, extended with additional conditions on the flow of time and the temporal enablement of transitions. The management of bindings, guards, and database updates via actions is kept unaltered. What changes is that, in a snapshot, each token now comes with a corresponding age, represented as a number in Q≥0.
As is customary in several temporal extensions of Petri nets, we consider two types of evolution step. The first type deals with time lapses: it indicates that a certain amount of time has elapsed with the net being quiescent, i.e., not firing any transition. This results in incrementing the age of all tokens by the specified amount of time.
The second type deals with a transition firing, which refines that of db-nets by checking that the chosen binding selects tokens whose corresponding ages are within the delay window attached to the transition. Specifically, let B be a timed db-net ⟨D, P, L, N, τ⟩, t a transition in N with τ(t) = ⟨v1, v2⟩, and σ a binding for t. We say that t is enabled in a given B-snapshot with binding σ if it is so according to Definition 3 and, in addition, all the tokens selected by σ have an age between v1 and v2.
Firing an enabled transition is identical to the case of standard db-nets (cf. Definition 4), with the only addition that each produced token has its age set to 0 (properly reflecting the fact that it is entering the corresponding place).
The execution semantics of a timed db-net then follows the standard construction (using the refined notions of enablement and firing), with the addition that each snapshot may be subject to an arbitrary time lapse. This is done by imposing that every B-snapshot ⟨I, m⟩ is connected to every B-snapshot of the form ⟨I′, m′⟩ where: • I′ = I (i.e., the database instances are identical); • m′ is identical to m except for the ages of the tokens, which all get incremented by the same, fixed amount x ∈ Q≥0 of time.
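The two evolution steps above can be sketched operationally. The following is a minimal, control-layer-only illustration (the persistence layer is omitted); the class and method names are our own and not part of the formalism. A time lapse ages all tokens uniformly, and a transition with window ⟨v1, v2⟩ only consumes tokens whose age lies inside that window, producing fresh tokens of age 0:

```python
# Hypothetical sketch of the timed execution semantics (control layer only).
from fractions import Fraction

class TimedMarking:
    def __init__(self):
        self.tokens = {}  # place name -> list of (color, age) pairs

    def add(self, place, color):
        # A token entering a place gets age 0.
        self.tokens.setdefault(place, []).append((color, Fraction(0)))

    def lapse(self, x):
        """Quiescent time lapse: every token ages by the same amount x."""
        for place in self.tokens:
            self.tokens[place] = [(c, a + Fraction(x)) for c, a in self.tokens[place]]

    def fire(self, in_place, out_place, v1, v2=None):
        """Fire a transition with timed guard <v1, v2> (v2=None stands for infinity)."""
        for i, (color, age) in enumerate(self.tokens.get(in_place, [])):
            if age >= v1 and (v2 is None or age <= v2):
                del self.tokens[in_place][i]
                self.add(out_place, color)  # produced token re-enters with age 0
                return True
        return False  # not enabled: no selected token lies within the time window

m = TimedMarking()
m.add("ch_in", "msg1")
assert not m.fire("ch_in", "ch_out", 30, 30)  # too early: age 0, window <30, 30>
m.lapse(30)
assert m.fire("ch_in", "ch_out", 30, 30)      # fires exactly at age 30, as T3 in Fig. 2
```

This mirrors how T3 of the aggregator becomes enabled only once the token in the timer place has aged exactly 30 time units.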
Given two B-snapshots s and s′, we say that s directly leads to s′, written s → s′, if there exists a direct transition from s to s′ in the transition system that captures the execution semantics of B. This means that s′ results from s because of a transition firing or a certain time lapse. We extend this notion to finite execution traces s0 → . . . → sn. We also write s *→ s′ if s directly or indirectly leads to s′. If this is the case, we say that s′ is reachable from s.
Example 8. To complete the aggregator: when the persisted sequence in the aggregator is complete or the sequence times out, the enabled Aggregate transition fires by reading the sequence number seq and a snapshot of the sequence messages, and by moving an aggregate msg′ to ch_out. Notably, the Aggregate transition is invariant to which of the two causes led to the completion of the sequence. ■

Checking reachability over timed db-nets
Checking fundamental correctness properties such as safety/reachability is of particular importance for timed db-nets, in the light of the subsequent discussion in Section 6.2 on reachable goal states. We consider here, in particular, the following relevant reach-template problem: Input: (i) a timed db-net B with set P_c of control places, (ii) an initial B-snapshot s0, (iii) a set P_empty ⊆ P_c of empty control places, (iv) a set P_filled ⊆ P_c of nonempty control places such that P_empty ∩ P_filled = ∅.
Output: yes if and only if there exists a finite sequence of B-snapshots of the form s0 → . . . → sn = ⟨In, mn⟩ such that for every place pe ∈ P_empty we have |mn(pe)| = 0, and for every place pf ∈ P_filled we have |mn(pf)| > 0.

Checking the emptiness of places in the target snapshot is especially relevant in the presence of timed transitions, so as to predicate over runs of the system where tokens are consumed within the corresponding temporal guards. For example, considering transition T3 in Fig. 2, asking for the ch_timer place to be empty guarantees that T3 indeed triggered whenever enabled.
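The target condition of reach-template is a simple predicate on the final control marking; the hard part, enumerating the reachable state space (e.g., via a TAPN model checker), is of course not shown here. A small sketch of the predicate itself, with hypothetical marking representation:

```python
def reach_template_holds(marking, p_empty, p_filled):
    """Target condition of reach-template on a single snapshot: every place
    in p_empty carries no tokens, every place in p_filled carries at least one."""
    return (all(len(marking.get(p, [])) == 0 for p in p_empty)
            and all(len(marking.get(p, [])) > 0 for p in p_filled))

# As in the aggregator example: ch_timer must be empty, ch_out nonempty.
m = {"ch_timer": [], "ch_out": [("msg", 0)]}
assert reach_template_holds(m, p_empty={"ch_timer"}, p_filled={"ch_out"})
assert not reach_template_holds(m, p_empty={"ch_out"}, p_filled={"ch_timer"})
```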
Since timed db-nets build on db-nets, reachability is highly undecidable, even for nets that do not employ timed transitions, have empty data logic and persistence layers, and only employ simple string colors. As pointed out in [10], this setting is in fact already expressive enough to capture ν-nets [13,14], for which reachability is undecidable. Similar undecidability results can be obtained by restricting the control layer even more, but allowing for the insertion and deletion of arbitrarily many tuples in the underlying persistence layer.
However, when controlling the size of information maintained by the control and persistence layers in each single snapshot, reachability and also more sophisticated forms of temporal model checking become decidable for db-nets using string and real data types (without arithmetics).
In particular, decidability has been shown for bounded, plain db-nets. Technically, a db-net is bounded if it is at once width-, depth-, and state-bounded. Intuitively, a db-net is bounded if it does not accumulate unboundedly many tokens in a place, and guarantees that the number of data objects used in each database instance does not exceed a pre-defined bound. The decidability of reachability for bounded db-nets does not imply decidability of reachability for bounded timed db-nets. In fact, ages in timed db-nets are subject to comparison and (global) increment operations that are not expressible in db-nets. However, we can prove decidability by resorting to a separation argument: the two dimensions of infinity, respectively related to the infinity of the data domains and to the flow of time, can in fact be tamed orthogonally to each other. In particular, we get the following.
Theorem 1. The reach-template problem is decidable for bounded and plain timed db-nets with initial snapshot.
Proof sketch. Consider a bounded timed db-net B with initial snapshot s0, empty control places P_empty, and filled control places P_filled. Using the faithful data abstraction techniques presented in [10, Thm 2], one obtains a corresponding timed db-net B′ enjoying two key properties. First, B′ is bisimilar to B, with a data-aware notion of bisimulation that takes into account both the dynamics induced by the net and the correspondence between data elements. Such a notion of bisimulation captures reachability as defined above, and consequently reach-template(B, s0, P_empty, P_filled) returns yes if and only if reach-template(B′, s0, P_empty, P_filled) does so. Second, the only source of infinity, when characterizing the execution semantics of B′, comes from the temporal aspects, and in particular the unboundedness of token ages. This means that B′ can be considered as a ''standard'' temporal variant of a CPN with bounded colors that, in turn, boils down to a temporal variant of an (uncolored) P/T net. In particular, one can easily see that B′ corresponds to a specific type of bounded Timed-Arc Petri Net (TAPN) [21], a classical P/T net with continuous time, where the tokens carry an age and arcs between places and transitions are labeled with time intervals that restrict the age of tokens: whenever B′ contains a transition t with τ(t) = ⟨v1, v2⟩, its corresponding TAPN labels each arc entering t with the interval [v1, v2]. Consequently, the infinity of B′ can be tamed using standard techniques known for bounded TAPNs, which indeed enjoy decidability of reachability for the queries tackled by reach-template [23,24]. In particular, notice that reach-template does not explicitly express constraints on the expected token ages when reaching the final state. □

It is interesting to notice that TAPNs have a more expressive mechanism to specify temporal guards in the net. In fact, TAPNs attach temporal guards to arcs, not transitions, and can therefore express different age requirements for different places, as well as produce tokens with an age nondeterministically picked from a specified interval. Hence, this more refined temporal semantics could be seamlessly introduced into our timed db-net model without compromising Theorem 1.

Formal pattern realizations
In this section we discuss (formal) pattern realizations using timed db-nets. Due to the high number of patterns, a formalization and corresponding in-detail description of all of them seems impractical. However, thanks to the fact that patterns can be classified into disjoint categories (see the requirement categories in Section 2), it suffices to discuss the most representative ones from each of these categories. We call this an instructive pattern formalization, which strives to formalize the patterns and, at the same time, offers modeling guidelines for other patterns of the respective categories using the provided examples. The structure of the subsequent sections is as follows: first we give a brief description of the pattern and its aim in the context of the requirements, then we discuss its relevance as a candidate, and finally we specify a realization.

Control flow: Load balancer
Control flow only patterns are those that route the token flow without looking inside the actual content of the message. Candidate Selection. To demonstrate the control flow only patterns (cf. requirement REQ-0 in Table 1), we have chosen the Load Balancer pattern [3]. Interestingly, this pattern also covers the message channel distribution requirement (cf. requirement REQ-1(b) in Table 1) and thus can be considered a relevant candidate of that category as well. Candidate Description. In a nutshell, the balancer distributes the incoming messages to a number of receivers based on a criterion that uses some probability distribution or ratio defined on the sent messages. Pattern Realization. To realize the probability- or ratio-based criterion in Petri nets, one could adopt stochastic Petri nets [25,26] or extend the db-net transition guard definition with the ability to sample values from a probability distribution (e.g., [27]). While the latter would extend the db-net persistence layer, it is unclear whether the decidability results discussed in the previous section would still hold. Hence, we opted for the ratio criterion that, as shown in Fig. 5, is realized using a persistent storage and transition guards with a simple balancing scheme. Specifically, a message msg in channel ch_in leads to a lookup of the current ratio by accessing the current message counts per output channel in the database (via the view place message count) and evaluating the guards assigned to the two transitions based on the extracted values. The ratio criterion is set up with two (generic) guards ϕ1(toCh1, toCh2) and ϕ2(toCh1, toCh2), respectively assigned to T1 and T2. If one of the guards holds, the corresponding transition fires by moving the message to its output place and updating the table by incrementing the corresponding channel count. The latter is done by executing action Inc_Chi(toChi), which performs Inc_Chi.del = {CountChi(toChi)} and Inc_Chi.add = {CountChi(toChi + 1)} (for i ∈ {1, 2}).
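A minimal sketch of the ratio criterion, with hypothetical guards ϕ1/ϕ2 that compare the persisted channel counts (the concrete guards and the `ratio` parameter are illustrative assumptions, not the paper's exact formulas):

```python
def balance(msg, counts, ratio=1.0):
    """Ratio-based load balancer: phi_i inspects the current per-channel
    counts (the view place), and the firing transition increments the count
    of the chosen channel (the Inc_Ch_i action)."""
    to_ch1, to_ch2 = counts["ch1"], counts["ch2"]
    if to_ch1 <= ratio * to_ch2:   # hypothetical guard phi_1(toCh1, toCh2)
        counts["ch1"] += 1
        return ("ch1", msg)
    else:                          # phi_2 is the negation of phi_1
        counts["ch2"] += 1
        return ("ch2", msg)

counts = {"ch1": 0, "ch2": 0}
routed = [balance(f"m{i}", counts)[0] for i in range(4)]
assert routed == ["ch1", "ch2", "ch1", "ch2"]  # a 1:1 ratio alternates channels
```

Because ϕ2 is the exact negation of ϕ1, exactly one of the two transitions is enabled for each incoming message, matching the pair-wise negated guard construction.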

Data flow: Message translator, splitter
Data flow only patterns are those that access the actual content of the message, mostly for data filtering and transformation purposes. Candidate Selection. The stateless Message Translator [2] is the canonical example of a data flow only pattern (cf. requirement REQ-2 in Table 1).
The (iterative) Splitter [2] is an example of a non-message-transformation data flow only pattern, which is also required for a case study scenario in Section 6. Candidate Description. The translator works on the data representation level (i.e., data flow) and transforms incoming messages of type TYPE1 into (output) messages of type TYPE2. The translation mechanism is defined a priori by the user.
The splitter represents a complex routing mechanism that, given an iterable collection of input messages it together with two objects pre and post, constructs for each of its elements a new message of the form [⟨pre⟩]⟨it_i⟩[⟨post⟩], with optional pre and post parts. Pattern Realizations. A timed db-net representing a message translator is shown in Fig. 6. As in the pattern's definition, the corresponding net performs the message transformation from one type to another. Specifically, an incoming message of type TYPE1 from input channel place ch_in is consumed (after firing start_translate) by subnet translation that, in turn, produces (by firing end_translate) a new message of type TYPE2 into the receiver place ch_out.
As for the Splitter pattern, we show in Fig. 7 that, under certain restrictions assumed for the type of the iterable collection at hand, the pattern can be fully realized using only colored Petri nets. The entering message payload in ch0 is separated into its parts pre, post, and it. While the first two are remembered during the processing, the iterable it is iteratively split into parts according to some criterion realized in the split subnet, which represents a custom split logic and is thus intentionally left unspecified (in Fig. 7 it is marked with a cloud symbol). Example 9. The split subnet can be adapted to the message format and the requirements of a specific scenario. Fig. 8 demonstrates a possible implementation of the subnet. Here, the functions get and drop are used to read and remove the n-th element of an iterable object. In our case, we alternate their applications to the iterable object it from place ch1 in order to extract and delete its first element, which is then placed into ch2. This procedure is repeated until it is empty (i.e., it is NULL). ■ Each of the extracted elements from it, together with the information about pre and post, is then used to create a new message (by calling function genMsg) that is passed to the output channel ch_out (Fig. 8).
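The get/drop iteration of the split subnet can be sketched as follows; the function names mirror the net's get/drop/genMsg, but the list-based encoding of the iterable is our own simplifying assumption:

```python
def gen_msg(pre, elem, post):
    """genMsg: wrap one extracted element with the optional pre/post parts."""
    return ([pre] if pre else []) + [elem] + ([post] if post else [])

def split(pre, it, post):
    """Iteratively emit one message per element of it: get reads the first
    element, drop removes it, until the iterable is empty (NULL)."""
    out = []
    while it:
        head, it = it[0], it[1:]   # get / drop on the first element
        out.append(gen_msg(pre, head, post))
    return out

msgs = split("hdr", ["a", "b"], None)
assert msgs == [["hdr", "a"], ["hdr", "b"]]  # pre kept, post omitted
```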

Data and control flow: Content-based router
Data and control flow patterns consider the actual content of the message for routing a token through the net. Candidate Selection. The Content-based Router pattern [2] is the canonical candidate for data and control flow patterns and also one of the most frequently used integration patterns (cf. [28]). Candidate Description. The Content-based Router examines the message content and routes the message onto a different channel based on data contained in the message. The routing conditions have to be evaluated in a pre-defined order (cf. requirement REQ-1(a) in Table 1). Pattern Realization. The realization of the Content-based Router pattern with conditions ϕ1, ϕ2 is shown in Fig. 9. Given that the router can be realized using various strategies imposed on the conditions (for example, using a priority function similarly to [25]), we opted for a more explicit realization where conditions have to be evaluated strictly in order (cf. requirement REQ-1(a) in Table 1), using pair-wise negated db-net transition guards. In our pattern realization, a message from input channel place ch_in is first evaluated against condition ϕ1. Based on the evaluation result, the message is moved either to ch1 or ch2. In case it has been moved to ch2, the net proceeds with the subsequent evaluation of the other conditions using the same scheme. When none of the guards holds, a non-guarded, low-priority default transition fires (not shown). The provided realization covers the router's semantics but requires (k × 2) + 1 transitions (i.e., condition, negation, and one default transition), with k being the number of conditions.
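The strictly in-order evaluation with pair-wise negated guards behaves like a first-match cascade; a minimal sketch (conditions and channel names are illustrative):

```python
def route(msg, conditions, channels, default):
    """Content-based router: evaluate routing conditions strictly in order
    (pair-wise negated guards); the first phi_i that holds determines the
    channel, otherwise the non-guarded default transition fires."""
    for phi, ch in zip(conditions, channels):
        if phi(msg):
            return ch
    return default

# Two hypothetical conditions, evaluated in a pre-defined order.
conds = [lambda m: m["age"] <= 25, lambda m: m["age"] <= 65]
assert route({"age": 23}, conds, ["ch1", "ch2"], "ch_def") == "ch1"
assert route({"age": 65}, conds, ["ch1", "ch2"], "ch_def") == "ch2"
assert route({"age": 99}, conds, ["ch1", "ch2"], "ch_def") == "ch_def"
```

With k conditions this corresponds to the (k × 2) + 1 transitions of the net realization: one positive and one negated guard per condition, plus the default.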

Data flow with transacted resources: Content enricher, resequencer
Data flow with transacted resource patterns consider the actual content of the message as well as additional information from transacted sources, mostly for message transformations. Candidate Selection. The first pattern chosen for this category is the Content Enricher [2] (cf. requirements REQ-2, REQ-4 in Table 1), which mainly reads from the transacted resource. Hence we add the stateful Resequencer [2], which writes to a transacted resource. Candidate Description. The content enricher requires accessing external resources (e.g., a relational database) based on data from an incoming message. Such data are then used to enrich the content of the message.
The stateful Resequencer is a pattern that ensures a certain order imposed on messages in (asynchronous) communication by persistently storing intermediate message sequences (cf. requirements REQ-2, REQ-4 in Table 1). Pattern Realizations. A Content Enricher db-net realization is shown in Fig. 10. The message enriching process starts with consuming a message from the input channel place ch_in. The message is represented by two elements: a message identifier key and message content msg. We use request-reply transitions T1 and T2 to direct the net flow towards extracting message-relevant data from an external resource. The extraction is performed by matching the message identifier key with the one in the storage. While the stateless enriching part is essentially a colored Petri net, we need db-nets in order to access a stateful resource: in ch3 one needs to specify and perform queries on the external storage (cf. REQ-4(a,b)). In addition to the specific pattern requirements, the message processing semantics of the EIPs describes one message (or token) at a time. Thus we assume that the represented net model always deals with a single message as well.
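The key-matching step of the enricher can be sketched as a lookup against the transacted resource; the dictionary-based storage stands in for the relational view place, and the names are illustrative:

```python
def enrich(key, msg, storage):
    """Content enricher: match the message identifier key against the
    external storage and, on a hit, extend the message content with the
    retrieved data; on a miss, the enriching transition cannot fire."""
    row = storage.get(key)          # query on the transacted resource
    if row is None:
        return None
    return (key, {**msg, **row})    # enriched message

# Hypothetical Empl relation: employee name -> position.
empl = {"Jane": {"position": "engineer"}}
assert enrich("Jane", {"age": 23}, empl) == ("Jane", {"age": 23, "position": "engineer"})
assert enrich("Bob", {"age": 40}, empl) is None
```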
Fig. 11 shows how the resequencer can be represented in db-nets. We assume that an incoming message msg comes with information about the sequence seq it belongs to and a certain order ord, and that all such data are eventually persisted in the database.
The information about stored messages can be accessed through the view place ch_p. For the first incoming message in a sequence (i.e., when the guard of T1 evaluates to true), a corresponding sequence entry with a unique identifier value bound to sid will be created in the persistent storage (sequences can be accessed in the view place ms) using action CreateSeq, whereas all subsequent messages of the same sequence (i.e., when the guard of T2 holds) are simply stored in the database via the updating action UpdateSeq. As soon as the sequence is complete, i.e., all messages of that sequence have arrived, the messages of this sequence are queried from the database in ascending order of their ord component (see the view place ch_p′ and its corresponding query) using transition Fetch. The query result is represented as a list that is forwarded to ch_out. Note that, similarly to the aggregator in Section 6, the completion condition can be extended by a custom logic, indicated by a subnet connected to T3.
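The store-then-fetch-ordered behavior of the resequencer can be sketched as follows; the fixed sequence length stands in for the (user-definable) completion condition, and all names are our own:

```python
class Resequencer:
    """Persist (seq, ord, msg) entries; once a sequence is complete, emit its
    messages in ascending ord (the Fetch transition with its ordered query)."""
    def __init__(self, seq_len):
        self.seq_len, self.db = seq_len, {}

    def accept(self, seq, ord_, msg):
        # CreateSeq on the first message of seq, UpdateSeq afterwards.
        self.db.setdefault(seq, []).append((ord_, msg))
        if len(self.db[seq]) == self.seq_len:   # simplified completion condition
            return [m for _, m in sorted(self.db.pop(seq))]
        return None                             # sequence still incomplete

r = Resequencer(seq_len=3)
assert r.accept("s1", 2, "b") is None
assert r.accept("s1", 3, "c") is None
assert r.accept("s1", 1, "a") == ["a", "b", "c"]  # emitted in ascending ord
```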

Control flow with transacted resource and time: Circuit breaker
Control flow with transacted resource and time patterns require a transacted resource to route a message to its receivers, while involving time aspects. Candidate Selection. To demonstrate the family of patterns that are based on a control flow with transacted resources and time, we selected as its representative the Circuit Breaker pattern [3] (cf. requirements REQ-0, REQ-3, REQ-4(a)). Candidate Description. The Circuit Breaker addresses failing or hung-up remote communication, which impacts the control flow of the request-reply pattern [2], by using transacted access to external resources. Thereby the request-reply pattern is just one out of many control flow only pattern examples for which the circuit breaker can be used. Pattern Realization. Fig. 12 shows a representation of the request-reply pattern in timed db-nets, extended by a circuit breaker ''wrapper'' that protects the remote call. At the beginning, every (endpoint-dedicated) circuit in the circuit breaker is closed (that is, its status in table Circuit is initially set to closed), thus allowing the input message to pass from input place ch_in to the request-reply pattern part (represented by a dedicated subnet) by firing Send_Req. If the request-reply pattern executes normally (i.e., transition Receive_Resp has been executed), the resulting message is placed in ch_out. Otherwise, in case an exception has been raised, the information about the failed endpoint is first placed in place ep_exec, and then, by firing T1 and the action UpdCount assigned to it, is stored both in a special place ch_exec and in the Endpoints table of the persistent storage. This table contains all endpoints together with the number of failures that happened at them. If the number of failures reaches a certain limit (e.g., num > 5), the circuit trips using transition T2 and updates its status in the corresponding entry of the Circuit relation to open using action TripCircuit.
This in turn immediately blocks the communication process which, however, can be resumed (i.e., the circuit is again set to closed and the failure count is set to 0) after 40 time units have passed and transition T3 can be fired. Note that as long as a circuit remains open, the messages from ch_in will be immediately redirected to ch_exec.
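The trip-and-recover behavior can be sketched per endpoint; the class and method names are hypothetical, while the failure limit (num > 5) and the 40-time-unit reset delay follow the realization above:

```python
class CircuitBreaker:
    """Per-endpoint circuit: count failures (UpdCount), trip to open once the
    limit is exceeded (TripCircuit via T2), and close again with a reset
    failure count after the retry delay has passed (T3)."""
    def __init__(self, limit=5, retry_after=40):
        self.limit, self.retry_after = limit, retry_after
        self.failures, self.opened_at = 0, None

    def allows(self, now):
        if self.opened_at is not None and now - self.opened_at >= self.retry_after:
            self.failures, self.opened_at = 0, None   # T3: resume communication
        return self.opened_at is None                 # open circuit redirects to ch_exec

    def record_failure(self, now):
        self.failures += 1                            # T1 / UpdCount
        if self.failures > self.limit:
            self.opened_at = now                      # T2 / TripCircuit

cb = CircuitBreaker()
for t in range(6):
    cb.record_failure(now=t)       # sixth failure at t=5 trips the circuit
assert not cb.allows(now=10)       # still open: messages redirected to ch_exec
assert cb.allows(now=46)           # 41 time units after tripping: closed again
```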

Control flow with time: Throttler, delayer
Control flow with time patterns are those that route tokens depending on time. Candidate Selection. Message expiration (cf. REQ-3(b)) can be modeled implicitly in the timed db-net tokens. The patterns that represent the most natural candidates for the requirements of delaying a message or an operation (cf. requirement REQ-3(c)) and of processing only a certain number of messages per time unit (cf. requirement REQ-3(d)) are the Delayer [3] and Throttler [3] patterns. Candidate Description. The first pattern is the Throttler. It helps to ensure that a specific receiver does not get overloaded by regulating the number of transferred messages. A slightly different pattern of this category is the Delayer, which uses a timer to reduce the frequency of messages sent to the receiving channel. Pattern Realizations. The representative patterns of this group mostly require control flow and time aspects, and thus can be represented using timed colored Petri nets (e.g., [19]).
Fig. 13 shows the realization of a throttler that emits at most n messages per 60 time units to the receiving place ch_out. To ensure that the number of messages taken from input channel place ch_in does not exceed the predefined bound, we introduce place cap that has n ''simple'' black tokens assigned to it in the initial marking. Given that every token's age in ch_in is initially set to zero, one can fire T1 until the time elapses to the point that the timed guard of T2 is satisfied. Note that every time T1 is fired, message token msg is appended to the collection of messages msgs using a special function add. 9 The Delayer, in turn, uses a timer to reduce the frequency of messages sent to the receiving place ch_out. As shown in Fig. 14, the sent message from the input channel place ch_in is first placed into intermediate place ch1, and then gets delayed by 60 time units (so as to make the timed guard of T2 satisfiable).
9 For example, in CPN Tools one can define the add function on lists.
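The n-tokens-per-window behavior of the throttler can be sketched as follows; the cap place holding n black tokens becomes a counter that is replenished once the 60-time-unit guard of T2 is satisfied (class and method names are our own):

```python
class Throttler:
    """Emit at most n messages per window of 60 time units: the cap place
    holds n black tokens, consumed by T1 and refilled once the timed guard
    of T2 fires at the end of the window."""
    def __init__(self, n, window=60):
        self.n, self.window = n, window
        self.cap, self.window_start = n, 0

    def try_emit(self, msg, now):
        if now - self.window_start >= self.window:    # T2: refill the cap place
            self.cap, self.window_start = self.n, now
        if self.cap > 0:                              # T1 consumes one cap token
            self.cap -= 1
            return msg
        return None                                   # throttled within this window

t = Throttler(n=2)
assert t.try_emit("m1", now=0) == "m1"
assert t.try_emit("m2", now=1) == "m2"
assert t.try_emit("m3", now=2) is None     # cap exhausted inside the window
assert t.try_emit("m3", now=61) == "m3"    # next window: cap replenished
```

A delayer would instead hold each message in an intermediate place and release it only after the 60-time-unit guard is satisfiable, as in Fig. 14.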

Data flow with transacted resources and time: Aggregator
The combination of data, transacted resource, and time aspects makes the patterns of this category the semantically most complex ones. Candidate Selection. One of the most frequently used patterns in this category (cf. [28]) is our leading pattern example, the Aggregator. To be fully functional it requires access to the data (cf. requirement REQ-2), the configuration of a timeout (cf. requirement REQ-3(a)), CRUD operations on transacted external resources (cf. requirement REQ-4), and compensation (cf. requirement REQ-5). Candidate Description. In a nutshell, the aggregator is a stateful filter that combines several incoming messages into one outgoing message. Since the aggregator is our running example throughout this work, more elaborate descriptions can be found in Examples 1, 5, 7 and 8. Pattern Realization. Fig. 2 specifies the semantics of the commonly used stateful Aggregator [2] pattern. The aggregator persistently collects messages, which can be seen in a dedicated view place ch_p, and aggregates them using the Aggregate transition based on a completion condition (i.e., the sequence that the message is related to is complete) or on a sequence completion timeout. For this, an incoming message msg is correlated to an existing sequence based on a sequence label seq attached to it. If the message is the first in a sequence (i.e., the guard of T1 is satisfied), a new sequence is created in the Sequences table and the message, together with a reference to this sequence, is recorded in the Messages table in the persistent storage using action CreateSeq. If a message correlates to an existing sequence seq (i.e., the guard of T2 is satisfied) that has been aggregated due to a timeout (i.e., T3 has fired and updated the sequence by assigning to it the expired status in the Sequences table), the update fails. This results in roll-back behavior: the database instance is restored to its previous state, while the net uses the roll-back arc to put the message back into the message channel ch_in. This message can then be added as the first one to another, newly created sequence seq. To aggregate the messages one needs to fire Aggregate which, in turn, can only fire if the sequence completion criterion has been met and transition T4 has fired. Note that the completion criterion is user-definable and thus we leave out its concrete implementation in this pattern realization.
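The correlate/complete/roll-back behavior can be sketched as follows. This is a deliberately simplified model: the timeout path only shows the roll-back of a late message for an expired sequence (the emission of the expired sequence's partial aggregate is omitted), a fixed sequence size stands in for the user-definable completion criterion, and all names are our own:

```python
class Aggregator:
    """Correlate messages by sequence label seq; aggregate on completion,
    and roll back messages arriving for an expired (timed-out) sequence."""
    def __init__(self, size, timeout=30):
        self.size, self.timeout = size, timeout
        self.seqs = {}   # seq -> {"msgs": [...], "created": t, "status": ...}

    def accept(self, seq, msg, now):
        s = self.seqs.setdefault(seq, {"msgs": [], "created": now, "status": "open"})
        if s["status"] == "open" and now - s["created"] >= self.timeout:
            s["status"] = "expired"                 # T3 / TimeoutSeq has fired
        if s["status"] == "expired":
            del self.seqs[seq]                      # update fails: roll-back arc
            return ("rollback", msg)                # message returns to ch_in
        s["msgs"].append(msg)                       # CreateSeq / UpdateSeq
        if len(s["msgs"]) == self.size:             # simplified completion criterion
            del self.seqs[seq]
            return ("aggregate", s["msgs"])
        return ("stored", None)

a = Aggregator(size=2)
assert a.accept("s1", "m1", now=0) == ("stored", None)
assert a.accept("s1", "m2", now=10) == ("aggregate", ["m1", "m2"])
assert a.accept("s2", "m3", now=0) == ("stored", None)
assert a.accept("s2", "m4", now=35) == ("rollback", "m4")  # sequence expired
```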

Discussion
The db-net foundation implicitly covers REQs 2 and 4 in the form of a relational formalization with database transactions. Together with the realizations of the Content-based Router, the Load Balancer (cf. REQ-1(a), (b)), and the Aggregator in Fig. 2 (cf. REQ-3(a), REQ-4 and REQ-5), we showed realizations for all of the requirements from Section 2. The expiry of tokens, depending on time information within the message, can be represented using CPNs and db-nets by modeling it as part of the token's color set and transition guards (similar to [19]). Nevertheless, to model the transition timeouts (cf. REQ-3(a)) and delays (cf. REQ-3(c)) one needs to resort to the more refined functionality realized in the timed transitions provided by the timed extension of db-nets. Similarly, the msg/time ratio (cf. REQ-3(d)) can be represented (see the Throttler pattern in Fig. 13).
The categorization of patterns according to their characteristics allows for an instructive formalization based on candidates of these categories and shows that even complex patterns can be defined in timed db-nets. This, in turn, spared us from discussing candidates of all the categories from Fig. 3, since their realizations can be seamlessly derived from the introduced patterns of other categories. For example, control and data with resource patterns do not require transacted resources, and can thus be realized similarly to their transacted resource counterparts by substituting view places with normal ones. The building blocks for the realization of transacted resource as well as data flow with time patterns can be derived from, e.g., the Resequencer or Aggregator patterns. Finally, the data flow with format patterns can be represented using CPNs, and are thus not further discussed here.
Thanks to our model checking results presented in the previous section, the correctness of the realization of each pattern can be formally verified. However, due to the absence of a model checker for (timed) db-nets, the formal analysis (cf. [10]) of such realizations cannot currently be performed automatically. Nevertheless, as an alternative to the model checking approach, it is possible to perform correctness testing as an experimental validation via (repeated) simulation of db-net models. We discuss this approach in the following section.

Correctness testing
The correctness of an integration pattern realization represented in timed db-nets can be validated by evaluating the execution traces of such models (e.g., similar to the state-oriented testing scheme by Zu and He [29]), where, at each step, an execution trace contains a B-snapshot representing the current state of the persistence layer together with a control layer marking. According to the timed db-net execution semantics (see Section 3), a consecutive, finite enactment of a pattern model starting from an initial B-snapshot s1 = ⟨I1, m1⟩ produces several B-snapshots s = ⟨I, m⟩ that, depending on the number of enactment steps, generate a finite execution trace s1 → . . . → sn+1 for some n ∈ N.

Example 10. Consider the content-based router. Since the router has no persistence, the database snapshot is empty. Fig. 15 shows a possible, finite execution of B starting from s1. In order to reach s2 from s1, transition T1 with routing condition [age ≤ 25] has to be fired. The resulting marking only contains the message of Jane, which satisfies the condition, while the message of Paul has not been forwarded. ■

In the router example, given the initial marking {(Jane, 23), (Paul, 65)} analyzed against the guard of T1, the marking s_expected in Fig. 15 denotes the only allowed final (or expected) state that can be reached from s1 in case of a properly working router. Indeed, the generated state s2 is identical to the expected one, and thus one can conjecture the correctness of the presented execution. More generally, this is defined in Definition 11 which, together with a comparison operator ∼, allows for a configurable correctness criterion over finite traces induced by pattern models.

Definition 11 (Correctness Criterion).
Let s1 → . . . → sn+1 be a finite execution trace of some pattern model M and let C be a set of reference B-snapshots that define a set of correct, desired states. We say that a pattern execution is correct if for all c ∈ C there exists an i ∈ {1, . . ., n + 1} such that c ∼ si. The operator ∼ typically denotes equality, but can also correspond to more sophisticated comparison operators for relating reached and desired snapshots. ■

For example, it is easy to see that in Fig. 15, s2 ∼ s_expected. Note that the definition still captures the situation where target snapshots are enumerated explicitly. Other forms of validation (e.g., based on statistical goals formulated over the exhibited behaviors of the system) would require a more fine-grained approach able to aggregate snapshots and traces. This is a matter of future work.
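Definition 11 can be sketched directly, reading "for all c ∈ C, c ∼ si for some i" as an existential match per reference snapshot (our reading of the definition); the snapshot encoding and the router-style example markings are illustrative assumptions:

```python
def trace_correct(trace, reference, sim=None):
    """Correctness criterion (Definition 11): the execution is correct if
    every reference snapshot c in C relates, via ~, to some trace snapshot s_i."""
    sim = sim or (lambda c, s: c == s)   # ~ defaults to snapshot equality
    return all(any(sim(c, s) for s in trace) for c in reference)

# Router-style markings: s2 is the only expected final state.
s1 = {"ch_in": [("Jane", 23), ("Paul", 65)], "ch1": []}
s2 = {"ch_in": [], "ch1": [("Jane", 23)]}
assert trace_correct([s1, s2], reference=[s2])       # s2 ~ s_expected
assert not trace_correct([s1], reference=[s2])       # expected state never reached
```

A more sophisticated ∼ (e.g., one comparing only token counts, or database projections) can be passed via the `sim` parameter.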
Next we discuss the application of this correctness criterion for three different requirement categories: control flow, data flow together with format and (transacted) resource, and timed patterns.

Control flow patterns
To test control flow patterns for correctness, the operator ∼ can be defined so as to compare the number of tokens in the correct, final snapshot. Nevertheless, there are control flow patterns whose correctness testing puts additional requirements on ∼. For example, the Load Balancer pattern (cf. REQ-1) denotes a special case, since it requires a sequence of input tokens which then have to produce data entries in the output instances that fit the probability values and distribution of the balancer (e.g., checked with a Kolmogorov-Smirnov test [30]). Therefore, the ∼ operator has to check whether the number of tokens in the desirable states follows a probability distribution. Example 12. Assuming that in the current state ϕ1 holds, we can fire transition T1. This, in turn, generates a new state si+1. In order to check whether si+1 is an expected state, we run a correctness test on the number of messages sent to the channels. Such a test allows us to see whether a final, desired message ratio is produced by the model. For example, knowing that the bandwidth of the second channel is considerably greater than that of the first channel, we may expect that the final ratio ch1/ch2 is not greater than 0.7. Our ∼ operator can be adapted accordingly to perform such a test. ■ The example shows that, even though the correctness testing of control flow patterns is feasible, there are cases in which such tasks may require extra work in the form of input data, mainly concerning the configuration of the testing setup.
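The ratio-based ∼ operator of the example can be sketched as a simple predicate over the final channel counts (the bound 0.7 follows the example; the function name is ours):

```python
def ratio_ok(counts, max_ratio=0.7):
    """A simple ~ operator for the load balancer: the final ch1/ch2 message
    ratio in the desired state must not exceed the configured bound."""
    return counts["ch2"] > 0 and counts["ch1"] / counts["ch2"] <= max_ratio

assert ratio_ok({"ch1": 3, "ch2": 7})        # 3/7 ~ 0.43 <= 0.7
assert not ratio_ok({"ch1": 8, "ch2": 2})    # 8/2 = 4.0 exceeds the bound
```

A distribution-fitting check (e.g., a Kolmogorov-Smirnov test over many runs) would replace this predicate when the balancer is probabilistic rather than ratio-based.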

Data flow and (transacted) resource patterns
In order to test the correctness of patterns that meet requirements REQ-2 and REQ-4 (cf. Table 1), one needs to consider testing not only the marking, as is done in the case of control flow patterns, but also to compare states of the persistent storage. Specifically, for a given initial snapshot s1 with an instance I1, either an expected state sn with an instance In or an expected error state sj must be produced by the pattern. Otherwise the pattern is considered incorrect.

[Fig. 17. A partial execution of a content enricher timed db-net. For simplicity we omit the view place marking in si+1 and s_expected.]
Example 13. Let us consider a timed db-net B for the Content Enricher in Fig. 10 with an initial B-snapshot s 1 that contains a marking m with a message composed of an employee name and age such that m(ch in ) = (Jane, 23). In Fig. 17 the database of the content enricher model stores information about employees and their positions in a relation called Empl. To reach the final state s i+1 from s i, we need to fire T 2. The fact that s i has a marking containing two tokens (one with the employee's name and age, and another one with the same name and position) shows that, a few steps before, the employee token (Jane, 23) was matched to the corresponding entry in the persistent storage and extra information about her position was extracted. If that were not the case, the execution trace, which is partially represented in Fig. 17, would not contain these two tokens, which in turn would mean that the transition used for accessing the view place could not fire since no matches were found. Thus, T 2 does not fire and the final state does not fulfill the correctness criterion for the given initial snapshot. ■ Note, however, that in this example the internal database state does not play the main role when testing the correctness. The correctness checking is done on the markings, which are populated from the database based on the matching condition assigned to the transition inspecting the view place.
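A minimal, executable reading of this criterion can be sketched as follows. The relation name Empl and the (Jane, 23) token are taken from the example; the dictionary-based snapshot encoding and the function names are illustrative assumptions.

```python
def enrich(marking, db):
    """Sketch of the Content Enricher of Fig. 10: match each input
    token against the Empl relation and attach the position.
    Returns None when no match is found, i.e., the enriching
    transition cannot fire and no expected final state is reached."""
    out = []
    for name, age in marking["ch_in"]:
        positions = [pos for (n, pos) in db["Empl"] if n == name]
        if not positions:
            return None
        out.append((name, age, positions[0]))
    return {"ch_out": out}

def correct(final_marking, expected_marking):
    """~ instantiated as plain equality on markings; the database
    instance itself is only read, as noted in the text."""
    return final_marking is not None and final_marking == expected_marking

db = {"Empl": [("Jane", "Manager"), ("Bob", "Clerk")]}
final = enrich({"ch_in": [("Jane", 23)]}, db)
print(correct(final, {"ch_out": [("Jane", 23, "Manager")]}))  # True
print(enrich({"ch_in": [("Eve", 30)]}, db))                   # None: no match
```

The second call mirrors the failure case of Example 13: with no matching database entry, no expected final state is produced and the correctness criterion is violated.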

Timed patterns
Finally, a timed pattern can be validated by extending database schemas with extra attributes for storing timestamps (as "on-insert timestamps" in actual databases) or by adding such timestamps to tokens, indicating the token creation time. This allows for checking delays, e.g., by comparing the insert timestamps time(I 1 ), time(I n ) of data in the initial instance I 1 and the final instance I n, or the timestamps in the tokens, respectively. With this, a numeric delay interval d = (d 1, d 2 ] can be checked, with d 1 = τ being the delay configured in the pattern and d 2 = τ + avg(t p ) + var(t p ) being the sum of τ, the average time avg(t p ) and the variance var(t p ) that the pattern requires for the internal transition firings without the configured delay. Since the delay τ is an interval itself, its upper value is taken for the application of the correctness criterion.
Example 14. Consider a timed db-net B for the Delayer in Fig. 14 with an initial B-snapshot s 1 that has a marking m with a message composed of an employee's name and age m(ch in ) = (Jane, 23). In Fig. 18 transition T 2 fires with a time delay of 60 time units.
Since the delayer does not require a database state, the correctness of the timestamps is checked on the markings. In this example, we assume that the average time avg(t delayer ) is 10.0 and the variance var(t delayer ) is 5.0 without the delay. This results in a desirable marking (Jane, 23, t) in s expected with t ∈ (60, 75] that, in turn, can be checked against the one in s 3 using ∼ which, on top of comparing the states by equality, also checks whether the timestamp belongs to the desired time interval. ■ Note that the token ages cannot be used for checking delays, since they are reset when inserted into a timed db-net place.
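The interval check underlying this example can be written down directly; the concrete numbers mirror Example 14 (τ = 60, avg = 10.0, var = 5.0), and the function name is our own.

```python
def in_delay_interval(t, tau=60.0, avg_tp=10.0, var_tp=5.0):
    """Check t ∈ (d1, d2] with d1 = tau (the configured delay) and
    d2 = tau + avg(tp) + var(tp), as defined for timed patterns."""
    return tau < t <= tau + avg_tp + var_tp

print(in_delay_interval(72.0))  # True: 72 ∈ (60, 75]
print(in_delay_interval(60.0))  # False: the interval is left-open
print(in_delay_interval(75.0))  # True: the upper bound is included
```

In a full ∼ implementation this predicate would be conjoined with the equality comparison of the remaining token components.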

Erroneous patterns
The main sources of errors during the responsible pattern formalization process in Fig. 1 are the conceptual work on defining the formal representation of a pattern, as well as the model-to-implementation step, in which the formal model is implemented and configured. Subsequently, we briefly describe these types of errors by example. Pattern Description to Model Errors. The formal representation of a pattern depends on different challenging factors concerning the quality and comprehensiveness of the pattern description, the clarity of its variations, and the complexity of the formalism. Consequently, the process of formalizing a pattern can introduce flaws due to a wrong understanding of the complex task at hand. It is important to note that in this work timed db-nets are used to formalize various enterprise integration patterns and integration scenarios involving them. This, in turn, facilitates the model-driven development of EIP realizations (e.g., Apache Camel [31]) as well as their combinations (most likely while running a Petri net-based engine in the background of some modeling suite that uses a more conventional graphical EIP language like the EIP icon notation [2]), since the generated interfaces have clear syntax and semantics, and thus are less prone to errors caused by users at design time. On top of that, designed application integration infrastructures can be tested against various correctness guarantees using conventional techniques such as validation through simulation. In particular, while testing software systems can be considered more program-based testing (e.g., [32,33]), the validation of Petri nets allows for specification-based testing when developing concurrent systems like integration solutions. This enables more elaborate testing methods like structural [34], place- or transition-oriented [35][36][37], or state-oriented testing [29,36], as used in this work. Those techniques go beyond the classical program-based testing used for software systems by providing deeper insights, e.g., into the actual concurrency graphs for structural testing [34] or the states of the transition system representing the software at hand [29,36]. Note also that the validation approach used in this paper is similar to (Big Bang) Integration Testing [38], which aims at checking the correctness of interfaces between various software components against a (correct) software design. Yet, when modeled with timed db-nets, the whole testing process is facilitated by the graphical, design-time representation of the integration scenario "topology", and thus allows for faster and more precise detection of test cases.
Given that on top of the formalism of timed db-nets one can enforce certain composition rules for designing integration scenarios from EIPs, it would even be possible to reduce the effort for the responsible development of integration scenarios in which manual errors are minimized. In other words, by restricting the modeling language and giving it a block structure (i.e., integration scenarios can be represented only as certain compositions of EIPs), one can facilitate user design.

Evaluation
In this section, we quantitatively evaluate the comprehensiveness of the timed db-net formalism for real-world integration scenarios (including pattern composition cases), show the correctness of the formal pattern realizations for the requirements discussed in Section 2.2 via simulation, and qualitatively study the application of timed db-nets to one hybrid integration scenario (i.e., "on-premise to cloud" (OP2C)) and one internet of things integration scenario (i.e., "device to cloud" (D2C)) (cf. Q3).

Comprehensiveness of timed db-nets
The comprehensiveness of timed db-nets is evaluated with respect to the coverage of the patterns in the catalogs depicted in Fig. 20(a). Here we compare the applicability of the existing CPN-based formalization [8] (Current-CPN), colored Petri nets in general (CPN (general)), and timed db-nets (timed db-net). While the formalization proposed in [8] covers only some of the EIPs from [2], many more EIPs as well as the recently extended patterns can be represented by colored Petri nets. Now, as we have indicated in the previous sections, one can formalize nearly all of the EIPs using timed db-nets. The only exception is one pattern, namely the Dynamic Router, whose requirements cannot be represented using the Petri net classes discussed in this work. In fact, in order to represent such a pattern one would need to employ a formalism that, on the one hand, subsumes db-nets and thus covers all the requirements discussed in Table 1 and, on the other hand, supports extra requirements (i.e., dynamically added or removed channels during runtime [8]) that, in turn, extend the expressiveness of the formalism with the ability to generate arbitrary topologies. To allow for such functionality one may opt for an approach similar to the one in [39], where the authors enrich classical Petri nets with tokens carrying Linear Logic formulas. This, however, would require further investigation.
After having analyzed the pattern coverage per formalism, we now consider the relevance of such formalisms for real-world integration scenarios. For this we implemented a Content Monitor pattern [3], which allows for the analysis of the actually deployed integration scenarios that are, for example, running on SAP Cloud Platform Integration (SAP CPI). Fig. 20(b) shows the coverage of the formalisms grouped by the following integration scenario domains, taken from [3]: On-Premise to Cloud (short OP2C, also known as hybrid integration), Cloud to Cloud or Business Network (native cloud applications C2C, B2B), and Device to Cloud (D2C, including Mobile, IoT and Personal Computing) integration. The results show that the current approach by Fahland and Gierds [8] is only partially sufficient to cover the OP2C, C2C and B2B scenarios. With a more general CPN approach, more than 70% of the more conventional OP2C communication patterns can be covered. The more recent and complex cloud, business network and device integration requires timed db-nets to a larger extent, which cover all analyzed scenarios. Note that the Dynamic Router with arbitrary topologies was not practically required for these scenarios, and thus seems to be of rather theoretical relevance. Conclusions. (1) Timed db-nets are sufficient to represent most of the EIPs; (2) EIPs that generate arbitrary topologies are not covered by the considered PN classes; (3) hybrid integration requires less complex semantics and is thus largely covered by CPNs; (4) timed db-nets cover all of the current integration scenarios in SAP CPI.

Simulation: Pattern correctness testing
We prototypically implemented the db-net formalism so as to experimentally test the correctness of the pattern realizations via simulation, following the idea described in Section 5. In order to test the correctness, we simply generate a finite execution trace, starting in an initial B-snapshot s 1 and finishing in s n, using the prototype, and inspect the generated marking together with the database instance. If s n corresponds to an expected state according to Definition 11, then the test is considered successful. Since the inner workings of a pattern can differ between various pattern implementations (e.g., an implementation may generate intermediate states that are not related to the actual pattern model but are used, for example, for collecting statistics), the correctness can also be checked at any step of such a pattern's finite execution trace.
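The testing procedure just described — generate a finite trace from s 1 and check whether some (intermediate or final) snapshot is an expected one — can be sketched abstractly as follows. The encoding of transitions as guard/fire pairs is our own simplification of the firing rule; the actual prototype works on full B-snapshots (marking plus database instance).

```python
def run_trace(initial, transitions, max_steps=100):
    """Generate a finite execution trace s1, ..., sn by repeatedly
    firing the first enabled transition (real firing is
    nondeterministic; a simulator would explore the choices)."""
    snapshot, trace = initial, [initial]
    for _ in range(max_steps):
        enabled = [t for t in transitions if t["guard"](snapshot)]
        if not enabled:
            break
        snapshot = enabled[0]["fire"](snapshot)
        trace.append(snapshot)
    return trace

def trace_correct(trace, expected, similar=lambda a, b: a == b):
    """Correctness test: some snapshot of the trace must be similar
    (via ~) to the expected state; this also covers checks at
    intermediate steps, as discussed above."""
    return any(similar(s, expected) for s in trace)

# Toy net: a counter transition that fires until the state reaches 3.
inc = {"guard": lambda s: s < 3, "fire": lambda s: s + 1}
trace = run_trace(0, [inc])
print(trace)                    # [0, 1, 2, 3]
print(trace_correct(trace, 3))  # True
```

The `similar` parameter is the hook for the pattern-specific ∼ variants discussed in the previous section (equality, ratio tests, or timestamp intervals).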
Prototype. In this work we have chosen CPN Tools v4.0 [11] (visited 03/2019) for the modeling and simulation. As compared to tools like Renew v2.5 (visited 03/2019: http://www.renew.de/), CPN Tools supports third-party extensions that can address the persistence and data logic layers of db-nets. Moreover, CPN Tools handles sophisticated simulation tasks over models that use the deployed extensions. To support db-nets, our extension adds support for defining view places together with corresponding SQL queries as well as actions, and realizes the full execution semantics of db-nets using Java and a PostgreSQL database. Simulation. We illustrate the correctness for the majority of the formalized patterns from Section 4 using the simulation in our CPN Tools extension. We focus on the following case studies: Message Translator, Splitter, Content Enricher and Aggregator. In addition, we discuss the case of a flawed example of the Content-based Router pattern from Example 15. Together, these patterns denote the most frequently used patterns in practice according to [28] and cover patterns from five out of seven categories discussed in Section 4 (excluding "control flow only" and "control flow with transacted resources"). Message Translator, Splitter. The realization of a variant of the message translator from Fig. 6 is shown in Fig. 21. Here, as input, the pattern receives a delimiter-separated string and translates it into a list of strings using a special function StrToList defined in CPN Tools. The final marking of the net shows the expected state s expected = ⟨I exp, m exp ⟩ in which the database instance is empty (thus not shown) and only CH2 is marked such that m exp (CH2) = {(''msg1''), (''msg2''), (''msg3'')}.
The Splitter from Fig. 7 is implemented as shown in Fig. 22. In this model, we have two input messages ([''M1'', ''M2'', ''M3''], A, B) and ([''M4'', ''M5'', ''M6''], A, B) consisting of iterable objects of size three each as well as pre and post data values A and B. These two messages are then split into six single objects of the shape (A, ''Mi'', B), for i ∈ {1, . . ., 6}. The partial execution of the Splitter in Fig. 22 demonstrates the second message to be already split (see the three output messages in place CH3), whereas the first message is ready to be split (i.e., the split transition is enabled; graphically, enabled transitions are highlighted by a green frame, indicating that they are ready to fire). Note that the current marking of the net can already be intermediately tested against the expected state s expected = ⟨I exp, m exp ⟩ in which the database instance is empty and only CH3 is marked such that m exp (CH3) = {(A, ''M1'', B), (A, ''M2'', B), (A, ''M3'', B), (A, ''M4'', B), (A, ''M5'', B), (A, ''M6'', B)}.
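Functionally, the splitting step reduces to a one-line transformation; the tuple layout (items, pre, post) follows the message shape used above, while the function name is an illustrative assumption.

```python
def split(message):
    """Split one composite message ([m1, ..., mk], pre, post) into k
    single tokens of the shape (pre, mi, post), as in Fig. 22."""
    items, pre, post = message
    return [(pre, m, post) for m in items]

print(split((["M1", "M2", "M3"], "A", "B")))
# [('A', 'M1', 'B'), ('A', 'M2', 'B'), ('A', 'M3', 'B')]
```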
Indeed, it is easy to see that m(CH3) ∼ m exp \ {(A, ''M1'', B), (A, ''M2'', B), (A, ''M3'', B)}, indicating that the elements of the second message have been correctly processed by duly adding pre and post data values. The correctness of the splitter implementation, as defined in Definition 11, naturally follows. Content Enricher. The Content Enricher from Fig. 10 can be realized as shown in Fig. 23.
The demonstrated net has three messages (namely, (13, ''msg1''), (17, ''msg2'') and (2, ''msg3'')) in its initial marking and in its current state has already enriched message msg2 by adding to the corresponding token an extra data value ''appid-17'' from the storage (see place ch4), which is accessed through the view place called db. The data in db is stored in the shape of key-value pairs which are then matched with messages by their keys (that is, the first components of the pairs). One can see that the net is ready to enrich msg1: the enrich msg transition is already enabled and the data from the storage that match the key of the token carrying msg1 have been fetched from db and placed in ch3. While the type of data used in different applications may require reconfiguring the query on the storage as well as using a different enrichment function, the topology of the net representing the enricher remains the same. To test the correctness, we assume an expected state with a partial marking only. Specifically, we are interested in m exp (ch4) = {(13, ''msg1'', ''appid-13''), (17, ''msg2'', ''appid-17''), (2, ''msg3'', ''appid-2'')}.
Aggregator.The Aggregator pattern in Fig. 2 can be realized using our CPN Tool extension as it is shown in Fig. 24.
Here we neglect the timed completion condition due to the differing temporal semantics in the tool. For the ease of simulation, we added a table Test_Messages containing four test messages (1, 1, ''text-1''), (2, 2, ''text-2''), (3, 1, ''text-3''), (4, 2, ''text-4''), with ids from {1, . . ., 4}, two sequences {1, 2} and a textual payload. The completion condition is configured to aggregate after two messages of the same sequence, and the aggregation function concatenates the message payloads separated by '|'. The expected result in the output place CH_out for the first sequence is one message with both payloads aggregated (1, ''text-3''|''text-1''). Now, when establishing a connection to the database and to the CPN Tools extension server, the data from the connected database tables are queried and the net is initialized with the data from the database in place CH_in. We simulated the aggregator realization in Fig. 24 for the two test sequences, until one sequence was complete. The intermediate marking m(CH_out) ∼ m exp \ {(''text-4''|''text-2'')}, for m exp (CH_out) = {(''text-3''|''text-1''), (''text-4''|''text-2'')}, will eventually result in the expected outcome in CH_out and the database. Flawed Content-based Router. While the previously discussed pattern implementations are correct, we added a flawed implementation of the Content-based Router, which is not required for the subsequent case studies, so as to demonstrate how the simulation can be used to detect an erroneous design. The Content-based Router is a pattern that takes one input message and passes it to exactly one receiver without changing its content. This is done by evaluating a condition per recipient on the content of the message. Fig. 25(a) shows one out of many router implementations which may look correct, but whose process layer nevertheless violates the correct design.
For the evaluation we use the aforementioned method for "data and (transacted) resource-bound patterns", which is based on the reachability of a correct database state. Such a correct state would be a database instance with one entry in table Channel 1 and an empty table Channel 2. This should happen due to the fact that the logical expressions on the arcs outgoing from T are expected to be disjoint. Now, let us explore the inner workings of the flawed pattern realization. In Fig. 25(a), transition T reads the token in place I and then conditionally inserts it into the two subsequent places. Since the value of the token matches all conditions, both output places O 1 and O 2 receive a copy of the token, as shown in Fig. 25(b). In terms of application integration, this could mean that two companies receive a payment request or a sales order that was actually meant for only one of them.
In the net, the two subsequent transitions push 1 and push 2 are enabled and fire by executing the database inserts defined in the ADD_TO_CHANNEL(i, x) function, where i is an index of one of the Channel tables and x is a data value to be inserted. From the net alone (i.e., in the initial state in Fig. 25(a)), the pattern realization seems to be correct. However, after its execution, we can see that no correct state has been reached. Indeed, after the tokens have been processed on the control layer, the database instance contains two entries (cf. Fig. 26), one in each table, which in turn means that the logical expressions that are meant to guard two different outputs are not disjoint, and by executing T we populated both O 1 and O 2 (instead of generating a token in only one of them).
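The root cause — non-disjoint routing conditions — can also be caught by a simple design-time check over a finite test domain. The guard functions below are illustrative stand-ins for the arc expressions of Fig. 25(a), not the actual expressions of the flawed model.

```python
from itertools import combinations

def disjoint_guards(guards, domain):
    """For a Content-based Router, no value in the test domain may
    satisfy two routing conditions at once; otherwise one input
    token would be copied to several output channels."""
    for g1, g2 in combinations(guards, 2):
        if any(g1(x) and g2(x) for x in domain):
            return False
    return True

flawed = [lambda x: x >= 5, lambda x: x >= 3]   # overlap for x >= 5
fixed = [lambda x: x >= 5, lambda x: x < 5]     # a proper partition
print(disjoint_guards(flawed, range(10)))  # False: e.g., 8 matches both
print(disjoint_guards(fixed, range(10)))   # True
```

Such a check complements the simulation-based detection: it flags the flaw before any trace is generated, whereas the simulation exposes its effect on the database state.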
Note that, when assuming one input token in I and a precedence of push 1 over push 2, and considering that I exp = {Channel 1 (8)}, the final database instance I(Channel 1 ) comes out as expected (that is, I(Channel 1 ) ∼ I exp (Channel 1 )), whereas I(Channel 2 ) ̸ ∼ I exp (Channel 2 ). It is easy to see that, knowing the control-flow and data aspects, a given timed db-net allows for detecting flaws in pattern realizations as well as providing richer information for fixing them.
Conclusions.(5) The CPN Tools extension allows for EIP simulation and correctness testing; (6) model checking implementations beyond correctness testing are desirable.

Applicability: Case studies
Individual patterns can be composed to represent integration scenarios, for which we study the formalism with respect to its applicability to two scenarios from the analysis: one hybrid OP2C and one D2C scenario.

Hybrid integration: Replicate material
Many organizations have started to connect their on-premise applications such as Customer Relationship Management (CRM) systems with cloud applications such as SAP Cloud for Customer (COD) using integration processes similar to the one shown in Fig. 27. A CRM Material is sent from the CRM system via EDI (more precisely, the SAP IDOC transport protocol) to an integration process running on SAP Cloud Platform Integration (SAP CPI, visited 03/2019: https://api.sap.com/). The pattern compositions in this catalog are represented in a BPMN model (e.g., [4]), and thus we subsequently represent our examples in this way; this is also more expressive in terms of message and data representation than the EIP icon notation [2]. The integration process enriches the message header (MSG.HDR) with additional information based on a document number for reliable messaging (i.e., AppID), which allows redelivery of the message in an exactly-once service quality [4]. The IDOC structure is then mapped to the COD service description and sent to the COD receiver. Formalization. For this study, we manually encoded the BPMN scenario into a timed db-net as shown in Fig. 28. Since the obtained timed db-net is mainly a composition of two patterns, namely the Content Enricher in Fig. 10 and the Message Translator in Fig. 6, we just represent an abstract net topology, indicating with places the connecting input and output channels of the patterns and leaving the nets representing the patterns as white boxes. The database schema together with the queries from the data logic layer are omitted, since they are identical to those used for the content enricher in Section 4.4. We would like to mention that, while the message translator is close to the currently existing CPN solution in [8], the content enricher (including the need of querying the persistent storage) must be represented using timed db-nets. Consequently, the enricher is a pattern not covered before, for which neither soundness nor correctness could be checked. It is also noteworthy that, given that hybrid integration usually denotes data movement between on-premise and cloud applications, which do not require complex integration logic (see [3]), the timed db-net representation for such rather straightforward hybrid integration scenarios gives richer insight into the data stored in the database as well as their manipulation (as opposed to, for example, BPMN), while the models remain intuitively understandable. Simulation. The replicate material scenario in a timed db-net (cf. Fig. 28) is implemented as a hierarchical net with our CPN Tools extension in Fig. 29, which references the pattern implementations of the enricher from Fig. 23 and the translator from Fig. 21, annotated with enricher and translator, respectively. In the hierarchical model representing this scenario, the MSG message from the ERP system is enriched with master data. The derived enriched message of type EMSG is then sent to the translator that maps the intermediate message format to the one understood by the COD system, thus generating a new message of type OUTPUT. Note that here the arc inscriptions abstractly account for messages without revealing their concrete structure.

Internet of things: Predictive Maintenance and Service (PDMS)
In the context of digital transformation, the automated maintenance of industrial machinery is imperative and requires communication between the machines, the machine controller and the ERP systems that orchestrate maintenance and service tasks. Integrated maintenance is realized by one of the analyzed D2C scenarios in Section 6.1, which helps to avoid production outages and to track the maintenance progress. Thereby, notifications are usually issued in a PDMS solution as shown in Fig. 30 from SAP CPI, represented in BPMN according to [4].
Although we simplified the scenario, the relevant aspects are preserved. Industrial manufacturing machines, denoted by Machine, measure their own states and observe their environment with sensors at a high frequency. When they detect an unexpected situation (e.g., a parameter crosses a threshold), they send an incident to a local endpoint (e.g., an IoT edge system), the PDMS, indicating that a follow-on action is required. The PDMS system creates alerts for the different machines and forwards them to a mediator, connecting the PDMS to the ERP system. To throttle the possibly high-frequency alerts, several incidents are collected (not shown) and sent as a list of alerts. Before the ERP notification can be created, additional data from the machines are queried based on the split and single alerts, and then enriched with information that adds the feature type. The information of the single alerts is used to predict the impact by value and machine type, and is then aggregated to be sent to ERP. In case the notification has been created successfully in ERP, the PDMS gets notified, including the service task identifier, and thus stops sending the alert (not shown).
Formalization. The BPMN scenario from Fig. 30 has been manually translated into a (schematically represented) timed db-net in Fig. 31. Since the scenario mainly relies on the patterns previously discussed in the paper, we omit their explicit representation and put instead white boxes surrounded by the input and output channel places of such patterns so as to indicate the connection points between them. In this way, we hide the Splitter (Fig. 7), Content Enricher (Fig. 10) and Aggregator (Fig. 2), with the proviso that the queries defined on top of the persistent storage are changed according to the database schema adopted in this scenario. Namely, the content enricher will need to query machine states in order to get data (more precisely, additional information about feature types) for enriching the messages, whereas the aggregator will deal with sequences based on the machine identifiers and as a result will produce messages of concatenated machine names. We reveal the only query, namely Q alert, that is used to create alerts in the PDMS system and populates the view place it is assigned to (ch alert ) with pairs containing device identifiers (ID) and corresponding critical values (ACT_VAL). We also add a part representing the predictor pattern. With transition Predict we consume messages (extrep) produced by the enricher and generate new ones together with the prediction by calling a function predict.
Simulation. The predictive maintenance scenario in timed db-nets (cf. Fig. 31) is implemented as a hierarchical net with our CPN Tools extension in Fig. 33, which references the pattern implementations of the enricher from Fig. 23 and the aggregator from Fig. 24, annotated with enricher and aggregator, respectively. In the original scenario, the PDMS sends lists of incidents to the integration system to reduce the number of requests as shown in Fig. 32. The incidents have an incident ID, a machine ID, and the actual critical incident value (e.g., (101, 1, 76)). Unfortunately, due to the fact that CPN Tools does not support third-party extensions with complex data types like lists, it was decided to make the PDMS component emit single messages using the get report transition together with its outgoing place PDMS (see Fig. 32). Consequently, the splitter is not required for separating the single incidents, but the incident messages of type REPORT are immediately enriched by the enricher.
After master data has been added to the message, a new one of type E_REPORT is produced. The net then immediately proceeds with predicting the impact using transition predict, which usually assesses the probability of a timely machine error based on previous experiences with the particular machine type. Here, for simplicity, the prediction is always set to true and the results are placed into prediction result. Tokens in this place are then used to aggregate several incidents by machine, where, for simplicity, we use machine identifiers to identify the aggregator's sequences. The aggregated incident messages are then sent to the ERP system. With the final marking m(ERP) and m exp (ERP) = {''Assembly Robot'', ''Engine Robot''|''Engine Robot''|''Engine Robot''}, for the three incidents from machine Engine Robot and one from Assembly Robot, we can see that m(ERP) ∼ m exp and thus conjecture that the scenario is correct.
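The grouping and '|'-concatenation performed by the aggregator in this scenario can be sketched as follows; machine names stand in for payloads, and the completion condition is simplified to "all input consumed", mirroring the simplifications made in the simulation.

```python
def aggregate(incidents):
    """Group incident payloads by machine id and concatenate each
    completed sequence with '|' (simplified completion condition)."""
    groups = {}
    for machine_id, payload in incidents:
        groups.setdefault(machine_id, []).append(payload)
    return {m: "|".join(ps) for m, ps in groups.items()}

incidents = [(1, "Engine Robot"), (2, "Assembly Robot"),
             (1, "Engine Robot"), (1, "Engine Robot")]
print(aggregate(incidents))
# {1: 'Engine Robot|Engine Robot|Engine Robot', 2: 'Assembly Robot'}
```

The result matches the expected marking m exp (ERP) above: three concatenated incidents for Engine Robot and a single one for Assembly Robot.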
Although the resulting timed db-net provides so far unmatched insights into the different aspects of integration scenarios, the complexity of the composed patterns increased even more when using hierarchical nets. Conclusions. (9) Timed db-net representations allow for an explicit modeling of all data aspects in complex data-aware scenarios (e.g., roll-back, queries); (10) the formalism's technical complexity might prevent non-technical users from using it on a regular basis.

Discussion
With the timed db-net formalization, it is possible to model and reason about EAI requirements like data, transacted resources and time (cf. conclusions (1), (5)), going beyond the simple hybrid integration scenarios (cf. conclusion (3)). Thereby the pattern realizations are self-contained, can be composed into complex integration scenarios (e.g., Fig. 31; cf. conclusion (7)) and analyzed (cf. conclusion (4)), while leaving the extension of our tool prototype to model checking as well as a formal treatment of pattern compositions as future work (cf. conclusions (6), (8), respectively). The composition is facilitated through "sharing" control places, preventing unwanted side-effects between patterns.
However, there are some limitations that we briefly discuss next. The PN classes considered in this work fall short when it comes to the generation of places, transitions or arcs (cf. conclusion (2)). For example, the Dynamic Router requires a proper representation of dynamically added or removed channels. Further, the deep insights into data-aware patterns and scenarios lead to a trade-off between sufficient information and model complexity (cf. conclusion (9)). The complexity of PN models compared to their BPMN counterparts in Fig. 30 might not allow for modeling by non-technical users (cf. conclusion (10)). Hence, we propose modeling in a less technical modeling notation, which can then be encoded into PN models, e.g., for verification. Further, while the PN formalism closes the conceptual vs. implementation gap by simulation, we leave a translation of existing EIP implementations to timed db-nets for verification as future work.
In summary, timed db-nets allow for representing patterns not covered before (e.g., the stateful aggregator with a timeout or the content enricher with external resources) and for checking their soundness and correctness. Note that for more complex scenarios the timed db-net representation might become very complex, e.g., compared to a BPMN representation, and thus might be more suitable as a formalism than as a modeling language for the average user (e.g., an integration developer).

Related work
We discussed related Petri net approaches in Sections 2 and 3. We now briefly situate our work within the context of further Petri net formalisms, and formalizations from EAI and related domains.

Petri net formalizations
Using PNs, van Hee et al. [40] define an alternative approach for representing and reasoning on database transactions using special token vectors with identifiers and inhibitor nets. While this could also be used similarly to db-nets, we build our formalism on db-nets due to their more comprehensive focus on (relational) data, operations, and persistent storage. Furthermore, there is work on ITCPNs by van der Aalst [19] and stochastic PNs by Zenie [25] that are either too restricted by time intervals with a single global time, in the case of ITCPNs, or hard to reason about in practice, as in the case of the stochastic nets. However, both works helped during the specification of the timed db-net models. Stochastic PNs [25] define a priority function, whose execution semantics however does not suffice for representing the required ordering in REQ-1(a).

Enterprise application integration
We found [3] that the only existing formalization of EIPs is provided in [8] using CPNs. In particular, Fahland and Gierds [8] define messages as colored tokens and use PN transition guards as conditions. However, their formalization does not cover all the requirements we singled out, and hence we employ db-nets [10], an extension of CPNs that, as discussed before, covers all but one of the EIPs. In the business process domain, PNs were successfully used to model and reason about workflow nets [41] and some resource-aware [42] and data-aware [43] extensions, without, however, tackling EIP requirements. Although our create notification and predictive maintenance scenarios have been captured in BPMN [4], we do not consider BPMN a suitable formalism for our objectives (i)-(iii); instead, we build on a formalization by PNs, which were also employed to define the BPMN control-flow semantics [44].
Similar to the BPMN and PN notations, several domain-specific languages (DSLs) have been developed that describe integration scenarios. Apart from the EIP icon notation [2], there are the Java-based Apache Camel DSL by Ibsen et al. [31] and the UML-based Guaraná DSL by Frantz et al. [45]. However, none of these languages aims to be a verification-friendly, formal integration scenario representation. Conversely, we do not strive to build yet another integration DSL. Instead, we claim that integration scenarios expressed in such languages can be formally represented in our formalism, so that formal analysis can be applied to them.
There are also other works on formal representations of integration patterns. For example, Mederly et al. [46] represent messages as first-order formulas and patterns as operations that add and delete formulas, and then apply AI planning to find a process with a minimal number of components. While this approach shares the formalization objective, our approach addresses a broader set of objectives (e.g., formal analysis, simulation). Furthermore, data, transactional database, and time semantics are not covered (cf. REQ-2, REQ-3, REQ-4). Mendes et al. [47] use "high-level" Petri nets as a language for the verification of service-oriented manufacturing systems, which is similar to the approach of Fahland and Gierds [8] for the integration patterns.

Interaction and architecture patterns
In the related domain of service-oriented architectures, service interactions and service interaction patterns were formalized. Works on service interactions largely target formalizations of service orchestrations and choreographies (i.e., similar to compositions of patterns). Those for web services [48][49][50], for example, are mainly based on process algebras that account for our time requirements but lack database transaction semantics (cf. REQ-4).

Business process management
An early algorithmic work by Sadiq and Orlowska [55] applied reduction rules to workflow graphs for the visual identification of structural conflicts (e.g., deadlocks) in business processes. From a control-flow perspective, we use a similar base representation, which we extend by data, transactional database semantics, and time (cf. REQ-2, REQ-3, REQ-4). Furthermore, we use graph rewriting for optimization purposes. Cabanillas et al. [56] extend the structural aspects by a data-centered view of the process that allows one to analyze the life cycle of an object and to check data compliance rules. Although this provides a view over the required data, it neither covers transactional database or time aspects (cf. REQ-3, REQ-4) nor proposes formal analysis capabilities for the extended EIPs; the main focus remains on the object life cycle analysis of the process.
In the workflow interaction domain, workflow nets and workflow modules are used, e.g., by van der Aalst and Weske [57] and by Martens [58], respectively. Furthermore, service interaction patterns are formalized using the composition capabilities of Petri nets provided by open nets (e.g., by van der Aalst et al. [59], or as open workflow nets by Massuthe et al. [60]); these will become of interest for formalizing compositions of timed db-nets, which is left as future work.

Conclusion
This work aims at providing the formal underpinning for responsible EAI along research questions Q1-Q4. Responsible EAI means to ground EIPs as basic building blocks on a formalization that meets relevant EAI requirements with respect to control flow, data, time, and transactional properties (→ Q1). Q1 was addressed by a thorough analysis of EAI requirements in comparison with existing formalisms and by the development of the timed db-net formalism, which adds the crucial, yet missing, time requirement to the existing formalism of db-nets [10]. Moreover, the formalism is supposed to be equipped with full execution semantics, which was achieved for timed db-nets in this work (→ Q2). With this, the formalization and execution of EIPs becomes possible. In order to bridge from the EIP formalization to EIP realizations, an instructive catalog of realizations of EIPs as timed db-net models was provided (→ Q3). With the verification results and the possibility of correctness testing based on execution traces, validation of EIP realizations is enabled (→ Q4). This is complemented by the possibility to simulate EIP realizations through a prototypical implementation (→ Q4). In summary, the research questions and objectives set out at the beginning of the paper could be addressed.
This work focuses on the formalization and realization of EIPs in isolation. EAI solutions, however, often require the composition of EIPs, and such compositions necessitate formal treatment as well: composing responsibly designed EIPs in an informal way would put their guarantees at stake. Hence, future work will address the formalization and realization of EIP compositions in interplay with the formalization and realization of the individual EIPs as proposed in this work.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

We capture the 11 patterns with time-related requirements as REQ-3 "time": (a) Timeout: numerical representation of a fixed, relative time (i.e., no global time); (b) Expiry date: discrete point in time according to a global time (i.e., based on existing message content); (c) Delay: numerical, fixed time value to wait or pause before continuing (e.g., often used in a redelivery policy); (d) Message/time ratio: number of messages that are sent during a period of time. Consequently, a quantified, fixed time delay or duration semantics is required.
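As a minimal intuition for the fixed-delay semantics of REQ-3(c) (an illustration of ours, not the timed db-net definition itself), tokens can be stamped with the time from which they become consumable:

```python
# Minimal sketch of fixed-delay token semantics (illustration only):
# a token produced at time t with delay d is consumable from t + d onward.

import heapq

class DelayedPlace:
    def __init__(self):
        self._heap = []  # min-heap of (available_at, payload)

    def put(self, payload, produced_at, delay):
        heapq.heappush(self._heap, (produced_at + delay, payload))

    def take(self, now):
        """Return the earliest token that is available at time `now`, or None."""
        if self._heap and self._heap[0][0] <= now:
            return heapq.heappop(self._heap)[1]
        return None
```

A timeout (REQ-3(a)) can be seen as the dual reading of the same mechanism: a transition becomes enabled only once the delay on a token has elapsed without the token being consumed elsewhere.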

Fig. 15. A finite db-net execution of a Content-based Router.

Fig. 16. A finite execution trace of a load balancer in timed db-nets. Here we use Empl to define an input place with employees, and count as a function that counts the number of tokens in marking m.

Example 12. Fig. 5 with an initial B-snapshot s_1 containing a marking m with several messages composed of employee names and ages: m = {(Jane, 23), (Lisa, 35), (Joe, 47), (Paul, 65)}. In the example shown in Fig. 16, three of the tokens have already been distributed. This means that the current, observable snapshot s_i has a database instance with two tuples, CountCh1(1) and CountCh2(2), and a marking with one token in the input place m(ch_in) = {(Jane, 23)}, one token in the first channel m(ch_1) = {(Lisa, 35)}, and two tokens in the second channel m(ch_2) = {(Joe, 47), (Paul, 65)}. Assuming that in the current state ϕ_1 holds, we can fire transition T_1. This, in turn, generates a new state s_i+1. In order to check

Example 15 (Content-based Router). While the Content-based Router in Fig. 9 represents the pattern correctly, one could go wrong with the ordered execution (cf. REQ-1(a)), e.g., through the transition guards at T_1 and T'_1. If these guards were set with overlapping conditions, then several tokens would be produced in different output places, i.e., m(ch_out1) = {(Paul, 65)} and m(ch_out2) = {(Paul, 65)}, which does not match the desired state in the example in Fig. 15. ■
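The guard-overlap pitfall can be made concrete with a small sketch (our own illustration, simplified from the net semantics): evaluating all guards independently duplicates the token, whereas ordered, first-match evaluation routes it exactly once.

```python
# Sketch of the guard-overlap pitfall (illustration only): with
# overlapping guards, unordered evaluation emits the token to every
# matching channel, while first-match evaluation emits it to exactly one.

def route_unordered(msg, guards):
    """Fire every transition whose guard holds (overlap => duplication)."""
    return [ch for ch, guard in guards if guard(msg)]

def route_first_match(msg, guards):
    """Fire only the first transition whose guard holds (cf. REQ-1(a))."""
    for ch, guard in guards:
        if guard(msg):
            return [ch]
    return []

# Overlapping conditions: both guards hold for a 65-year-old employee.
guards = [("ch_out1", lambda m: m["age"] >= 60),
          ("ch_out2", lambda m: m["age"] >= 40)]
paul = {"name": "Paul", "age": 65}
```

This mirrors the example above: unordered evaluation places (Paul, 65) in both ch_out1 and ch_out2, while the ordered execution required by REQ-1(a) routes it only to ch_out1.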

Fig. 29. A pattern composition for the replicate scenario as a hierarchical net in CPN Tools.

Fig. 30. Predictive maintenance: create notification scenario as modeled by a user (an "Internet of Things" scenario).

Fig. 32. Create notification pattern composition as a hierarchical net in CPN Tools (in the initial state).

Fig. 33. Create notification pattern composition as a hierarchical net in CPN Tools (in the final state).

Table 1. Formalization requirements.