Article

Leveraging a Heterogeneous Ensemble Learning for Outcome-Based Predictive Monitoring Using Business Process Event Logs

1 Department of Information Systems, University of Maryland, Baltimore County (UMBC), MD 21250, USA
2 Department of Industrial Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(16), 2548; https://doi.org/10.3390/electronics11162548
Submission received: 7 July 2022 / Revised: 5 August 2022 / Accepted: 12 August 2022 / Published: 15 August 2022
(This article belongs to the Collection Predictive and Learning Control in Engineering Applications)

Abstract

Outcome-based predictive process monitoring concerns predicting the outcome of a running process case using historical events stored as so-called process event logs. This prediction problem has been approached using different learning models in the literature. Ensemble learners have been shown to be particularly effective in outcome-based business process predictive monitoring, even when compared with learners exploiting complex deep learning architectures. However, the ensemble learners that have been used in the literature rely on weak base learners, such as decision trees. In this article, an advanced stacking ensemble technique for outcome-based predictive monitoring is introduced. The proposed stacking ensemble employs strong learners as base classifiers, i.e., other ensembles. More specifically, we consider stacking of random forests, extreme gradient boosting machines, and gradient boosting machines to train a process outcome prediction model. We evaluate the proposed approach using publicly available event logs. The results show that the proposed model is a promising approach for the outcome-based prediction task. We extensively compare the performance differences between the proposed method and the strong base learners, also using statistical tests to assess the generalizability of the results obtained.

1. Introduction

Business process monitoring is concerned with extracting previously unknown and valuable insights about business processes from historical data, usually available as so-called event logs [1]. Event logs contain events. Each event is characterized by information regarding the process execution to which it belongs, a.k.a. process case, the activity that was executed, a timestamp capturing the time instant at which an activity was executed, and other domain-specific attributes, such as the human resource that executed the activity. We refer to the sequence of events belonging to the process case as a trace. For instance, in a process about granting building permits by a public administration, a trace collects the events regarding the processing of one specific building permit application.
Predictive process monitoring of business processes has emerged in the last ten years and aims at extracting insights about business processes by building predictive models using event log data [2]. There are several aspects of a process that can be predicted, such as the timestamp of next events [3], the future activities that will be executed in a process case [4,5,6], or the outcome of a process case [7]. Predictive monitoring enables proactive decision making, such as informing customers that their request may be processed later than expected or addressing the possible occurrence of an unfavorable exception by implementing protective measures.
Predictive process monitoring models are developed using classification and regression techniques. Predicting future activities in a case or the outcome of process cases entails the use of classification techniques, whereas predicting timestamps requires regression techniques. In this paper, we focus on outcome-based predictive monitoring, where normally the aim is to predict a binary label capturing the outcome of process cases, e.g., positive vs. negative or regular vs. deviant.
The research on process-outcome-based predictive monitoring has evolved along two main lines. On the one hand, a recent objective has been to devise more advanced features that can improve the predictive power of the models. Examples are features capturing the load level of the system in which business processes are executed [8] or features capturing the experience of the human resources involved in a process [9]. On the other hand, the objective of researchers historically has been to develop more accurate predictive models. Classifier ensembles have demonstrated some advantages over individual classifiers when used in outcome-based predictive monitoring. Teinemaa et al. [7] have reported that XGB emerged as the best overall performer in over 50% of the datasets considered in their extensive benchmark. However, its success is assessed in that benchmark against a restricted set of alternative ensemble architectures.
In this paper, we leverage an advanced heterogeneous ensemble learning method for outcome-based predictive process monitoring. Specifically, we propose to adopt a stacking ensemble technique involving strong learners [10]. In many classification problems, in fact, only weak classifiers such as decision trees or feed-forward neural networks are chosen as base classifiers. Rather than utilizing a weak classifier, we consider in this paper the potential of constructing a classifier ensemble using strong learners. We build an ensemble scheme based on the stacking algorithm, where the base learners are other ensemble learners, i.e., extreme gradient boosting machine (XGB), random forest (RF), and gradient boosting machine (GBM). While XGB and RF have been considered already in previous benchmarks on outcome-based predictive monitoring, GBM has not been considered by previous research in this particular classification problem.
We carried out an extensive experiment considering 25 publicly available event log datasets. Moreover, to provide a fair assessment, we consider an extensive set of performance measures—i.e., F1, F2, MCC, accuracy, area under the ROC curve (AUC), and area under the precision–recall curve (AUCPR)—and use statistical tests to assess the significance of the performance levels and rankings obtained in the experiment. The results show that the proposed model generally outperforms the baselines, i.e., the strong learners used as base models for the ensemble. The performance difference is significant, particularly when considering measures that are more appropriate for imbalanced datasets, such as MCC and AUCPR. Note, in fact, that the outcome labels of most publicly available real-world event logs are strongly imbalanced.
The paper is organized as follows. Section 2 discusses the related work. Section 3 formally defines the problem of outcome-based predictive process monitoring and introduces the stacked ensemble learning method to address it. The evaluation of the proposed method is presented in Section 4 and conclusions are drawn in Section 5.

2. Related Work

Di Francescomarino et al. [11] provided a qualitative value-driven analysis of various predictive process monitoring techniques to assist decision-makers in selecting the best predictive technique for a given task. The review presented by Marquez-Chamorro et al. [2] considers standard criteria for classifying predictive monitoring approaches in the literature, such as the prediction task or technique used. Furthermore, it categorizes approaches in the literature based on their process-awareness, i.e., whether or not an approach employs an explicit representation of process models. Santoso [12] specified a language for properly defining the prediction task, enabling researchers to express various types of predictive monitoring problems while not relying on any particular machine learning techniques.
As far as outcome-oriented predictive monitoring is concerned, Teinemaa et al. [7] developed a comprehensive analysis and quantitative benchmark of different encoding techniques and models using real-world event logs. In this benchmark, RF and XGB are the only ensemble models considered. XGB emerges as the top-performing classifier in the majority of the prediction tasks.
Verenich et al. [13] proposed a transparent approach for predicting quantitative performance indicators of business process performance. The predicted indicators could be more explainable since they were decomposed into elementary components. The explainability of process outcome predictions was addressed recently by Galanti et al. [14] using the SHAP method.
Recently, researchers have increasingly focused on applying deep learning techniques to solve the problem of process outcome prediction. Rama-Maneiro et al. [15] provided a systematic literature review of deep learning techniques for predictive monitoring of business processes, offering an in-depth analysis and experimental evaluation of 10 approaches on 12 event log datasets. Similarly, in Neu et al. [16], a systematic literature review was carried out to capture the state-of-the-art deep learning methods for process prediction. The literature is classified along the dimensions of neural network type, prediction type, input features, and encoding methods.
Kratsch et al. [17] compared the performance of deep learning models, e.g., feed-forward neural networks and LSTM networks, and classical machine learning algorithms, i.e., random forests and support vector machines, using five publicly available event logs. Metzger et al. [18] proposed an ensemble of deep learning models that can produce outcome predictions at arbitrary points during process execution. Wang et al. [19] proposed a real-time, outcome-oriented predictive process monitoring method based on bidirectional LSTM networks and attention mechanisms. Better performance could be achieved because the features having a decisive effect on the outcome were identified and optimized.
To address the issue of inaccurate or overfitting prediction models, a fine-tuned deep neural network that learns general-enough trace representations from unlabeled log traces was proposed in [20]. Pasquadibisceglie et al. [21] leveraged a convolutional neural network (CNN) architecture to classify the outcome of an ongoing trace, showing that the proposed technique could be integrated as an intelligent assistant to support sales agents in their negotiations.
Generally, the deep learning approaches in the literature do not necessarily outperform other, more traditional techniques, such as ensembles (e.g., XGB and RF). Therefore, devising novel architectures based on such traditional techniques is still relevant in this prediction context, in particular to avoid the high training costs (time and computational resources) of deep-learning-based architectures.

3. Problem Definition and Method

In this section, we first formalize the problem of outcome-based process predictive monitoring. Then, we present in detail the proposed stacking ensemble method using strong learners.

3.1. Problem Definition

In an event log, a trace represents the sequence of events recorded during the execution of a business process instance (i.e., a case). Each event in a trace records the information about a specific activity that occurs during the execution of a process case.
We denote the collection of all event identifiers (event universe) by $\mathcal{E}$ and the universe of attribute names by $\mathcal{A}$. An event $e$ is a tuple $e = (c, a, t, (d_1, v_1), \ldots, (d_m, v_m))$, where $c$ is the case id; $a$ is the activity recorded by this event; $t$ is the timestamp at which the event has been recorded; and $(d_1, v_1), \ldots, (d_m, v_m)$, with $m \geq 0$, are other domain-specific attributes and their values. For instance, the event $e = (5, \mathit{check}, 2022.1.2, \mathit{resource} = \mathit{Alice}, \mathit{amount} = 1000, \mathit{type} = \mathit{eligibility})$ captures the fact that, in a process case associated with loan request number 5, the human resource Alice has executed an eligibility check of a loan request of 1000 USD on 2 January 2022. The value of an attribute $d_m$ of an event $e$ is denoted by the symbol $\#_m(e)$. The timestamp of the event $e$, for example, can be represented as $\#_t(e)$. Whenever an event $e$ does not have a value for an attribute $d_i$, we write $v_i = \bot$ (where $\bot$ is the undefined value). For instance, the human resource associated with an event may have not been recorded.
We denote a finite sequence over $\mathcal{E}$ of length $m$ by the mapping $\omega : \{1, \ldots, m\} \to \mathcal{E}$, and we write this sequence as the tuple of elements of $\mathcal{E}$ denoted by $\omega = \langle e_1, e_2, \ldots, e_m \rangle$, where $e_j = \omega(j)$ for each $j \in \{1, \ldots, m\}$. $\mathcal{E}^*$ is used to represent the set of all finite sequences over $\mathcal{E}$, whereas $|\omega|$ is used to express the length of a sequence $\omega$.
An event trace $\tau$ is a finite sequence over $\mathcal{E}$ such that each event $e \in \mathcal{E}$ occurs only once in $\tau$, i.e., $\tau \in \mathcal{E}^*$ and, for $1 \leq j < k \leq |\tau|$, we have $\tau(j) \neq \tau(k)$, where $\tau(j)$ refers to the event of the trace $\tau$ at index $j$. Let $\tau = \langle e_1, e_2, \ldots, e_m \rangle$ be a trace, and let $\tau_l = \langle e_1, e_2, \ldots, e_l \rangle$ be the $l$-length prefix of $\tau$ (for $1 \leq l < m$). Lastly, an event log $L$ is a set of traces such that each event occurs at most once in the entire log, i.e., for each $\tau_1, \tau_2 \in L$ such that $\tau_1 \neq \tau_2$, we have $\tau_1 \cap \tau_2 = \emptyset$, where $\tau_1 \cap \tau_2 = \{ e \in \mathcal{E} \mid \exists j, k \in \mathbb{Z}^+ .\ \tau_1(j) = \tau_2(k) = e \}$.
Outcome-oriented predictive process monitoring seeks to make predictions concerning the outcome of a running trace, given a set of completed traces (i.e., an event log) with their known outcomes. Let $\mathcal{S}$ be the universe of all possible traces. A labeling function $y : \mathcal{S} \to \mathcal{Y}$ maps a trace $\tau$ to its class label (outcome) $y(\tau) \in \mathcal{Y}$, where $\mathcal{Y}$ is a finite collection of distinct categorical outcomes. In this work, and normally in the outcome-based predictive monitoring literature, we consider a binary outcome label, i.e., $\mathcal{Y} = \{-1, +1\}$. The classification model uses independent variables (referred to as features) and learns a function to estimate the dependent variable (i.e., the class label). Hence, it is necessary to encode every event log trace as a feature vector in order to train a classification model to learn the class label.
We formally specify a trace encoder and a classification model as follows. A trace encoder $f : \mathcal{S} \to \mathcal{X}_1 \times \cdots \times \mathcal{X}_q$ is a function that converts a trace $\tau$ into a feature vector in the $q$-dimensional vector space $\mathcal{X}_1 \times \cdots \times \mathcal{X}_q$, where $\mathcal{X}_r \subseteq \mathbb{R}$, $1 \leq r \leq q$, denotes the domain of the $r$-th feature. A classification model is a function that classifies a feature vector based on class labels: a classification model $cls : \mathcal{X}_1 \times \cdots \times \mathcal{X}_q \to \mathcal{Y}$ takes an encoded $q$-dimensional trace and estimates its class label.
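To make the definitions above concrete, the following minimal Python sketch (our illustration; the attribute names, the example labeling rule, and helpers such as last_state_encode are hypothetical and not part of the formalization) represents events and traces and converts a prefix into a feature vector with a simple last-state encoder.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Event:
    case_id: str          # c: identifier of the process case
    activity: str         # a: activity recorded by the event
    timestamp: str        # t: timestamp of the event
    attrs: Dict[str, object] = field(default_factory=dict)  # (d_i, v_i) pairs

Trace = List[Event]  # a trace is a finite sequence of events of one case

def label(trace: Trace) -> int:
    """Binary labeling function y: S -> {-1, +1}.
    Hypothetical rule: a case is positive if activity 'approve' eventually occurs."""
    return +1 if any(e.activity == "approve" for e in trace) else -1

def last_state_encode(prefix: Trace, activities: List[str]) -> List[float]:
    """Trace encoder f: S -> X_1 x ... x X_q using a last-state encoding:
    one-hot of the last event's activity plus the prefix length."""
    last = prefix[-1]
    onehot = [1.0 if last.activity == a else 0.0 for a in activities]
    return onehot + [float(len(prefix))]

# toy usage
trace = [Event("5", "check", "2022-01-02T10:00", {"resource": "Alice", "amount": 1000}),
         Event("5", "approve", "2022-01-03T09:30", {"resource": "Bob"})]
print(last_state_encode(trace[:1], ["check", "approve"]), label(trace))
```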

3.2. Proposed Prediction Model

We propose a stacking model for combining strong learners in an ensemble for outcome-based predictive monitoring of business processes. In this work, we consider the following strong learners: XGB, RF, and GBM. XGB and RF have been successfully employed in outcome-based predictive monitoring in the past (Teinemaa et al. [7]), whereas GBM is acknowledged to be a high-performing learner in many different classification scenarios.
The stacking is based on the super learner technique [10], in which each base classifier is trained using an internal k-fold cross validation. Stacking ensemble, or stacked generalization [10,22], entails training a second-level metalearner to determine the best mixture of constituent learners. The aim of our stacking ensemble model is to blend together strong and varied groups of learners.
Algorithm 1 outlines the procedure required to build a stacked generalization ensemble for outcome-based predictive process monitoring. Let D be an event log training subset with i instances and j features. Each constituent learning algorithm C undergoes a 10-fold cross-validation (10cv) on the training set; the same type of 10cv (stratified, in our case) must be used for all constituent learners. The cross-validated prediction outcomes R_1, R_2, ..., R_C are aggregated to form a new matrix T. Together with the initial response vector Y, they are used to train and cross-validate the metalearner, which in our case is a generalized linear model. Once the metalearning model is constructed, the proposed ensemble, composed of the constituent learning models and the metalearning model, is employed to generate predictions on the event log testing subset.
Algorithm 1 Procedure to construct a stacked generalization ensemble with an internal 10-fold cross-validation (10cv) for outcome-based predictive monitoring of business processes
Preparation:
Event log training dataset D with i rows and j columns, depicted as input matrix X and response vector Y.
Set the tuned C constituent learning algorithms, i.e., RF, GBM, and XGB.
Set the metalearner, e.g., a generalized linear model.
Training phase:
Train each of the C constituent learners on the training set using stratified 10cv.
Gather the cross-validated prediction results R_1, R_2, ..., R_C.
Gather the P prediction values from the C models and construct a new matrix T of dimension P × C.
Along with the original response vector Y, train and cross-validate the metalearner: Y = f(T).
Prediction phase:
Collect the prediction results from the C models and feed them into the metalearner.
Collect the final stacked generalization ensemble prediction.
The hyperparameters of each constituent learner are tuned using random search [23]. The details of the hyperparameter search space, as well as the best values obtained to train each constituent learner on each dataset, are reported in Appendix A. For the implementation, we utilize the H2O machine learning framework, which provides an interface in R to run the experiment.
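For illustration, the following sketch builds a stacked ensemble analogous to Algorithm 1 using scikit-learn and XGBoost in Python; the paper's actual implementation relies on the H2O framework in R, so the estimators, the logistic regression used as a stand-in for the generalized linear metalearner, and all hyperparameter values below are placeholder assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, train_test_split
from xgboost import XGBClassifier  # assumes the xgboost package is installed

# X, y: encoded prefixes and binary outcome labels (toy random data here)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.integers(0, 2, size=500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gbm", GradientBoostingClassifier(n_estimators=100, learning_rate=0.05, random_state=0)),
    ("xgb", XGBClassifier(n_estimators=100, learning_rate=0.05, max_depth=6, random_state=0)),
]

# Stacked generalization: each base learner is cross-validated internally (10 folds),
# and its out-of-fold predicted probabilities form the metalearner's training matrix T.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),  # stand-in for a GLM metalearner
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    stack_method="predict_proba",
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```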
More details about the base learners utilized in this study along with details regarding the hyperparameter settings to tune are presented next.
(a)
Random forest (RF) [24]
A variant of bagging ensemble, in which a decision tree is employed as the base classifier. It is composed of a set of tree-structured weak classifiers, each of which is built from a random vector Θ_k, where the Θ_k, k = 1, ..., L, are mutually independent and identically distributed. Each individual tree casts a unit vote for the most popular class given the input x. The hyperparameters to specify to build a random forest model are the number of trees (ntrees), the minimum number of samples for a leaf (min_rows), the maximum tree depth (max_depth), the number of bins for the histogram to build (nbins and nbins_cats), the row sampling rate (sample_rate), the column sampling rate as a function of the depth in the tree (col_sample_rate_level), the column sample rate per tree (col_sample_rate_tree), the minimum relative improvement in squared error reduction for a split to occur (min_split_imprv), and the type of histogram used for finding the optimal split (histogram_type).
(b)
Gradient boosting machine (GBM) [25]
A forward learning ensemble, where a classification and regression tree (CART) is used as the base classifier. It develops trees sequentially, with each subsequent tree relying on the outcomes of the preceding trees. For a particular sample S, the final estimate h(x) is the sum of the estimates from each tree. The hyperparameters to specify to build a gradient boosting machine model are the number of trees (ntrees), the minimum number of samples for a leaf (min_rows), the maximum tree depth (max_depth), the number of bins for the histogram to build (nbins and nbins_cats), the learning rate (learn_rate), the column sampling rate (col_sample_rate), the row sampling rate (sample_rate), the column sampling rate as a function of the depth in the tree (col_sample_rate_level), the column sample rate per tree (col_sample_rate_tree), the minimum relative improvement in squared error reduction for a split to occur (min_split_imprv), and the type of histogram used for finding the optimal split (histogram_type).
(c)
Extreme gradient boosting machine (XGB) [26]
One of the most popular gradient boosting frameworks, which implements a process called boosting to produce accurate models. Both the gradient boosting machine and the extreme gradient boosting machine operate on the same gradient boosting concept. XGB, specifically, employs a more regularized model to prevent overfitting, which is intended to improve performance. In addition, XGB utilizes sparse matrices with a sparsity-aware algorithm that allows more efficient use of the processor's cache. Similar to the previous classifiers, several hyperparameters must be specified when creating an XGB model, such as the number of trees (ntrees), the minimum number of samples for a leaf (min_rows), the maximum tree depth (max_depth), the learning rate (learn_rate), the column sampling rate (col_sample_rate), the row sampling rate (sample_rate), the column sample rate per tree (col_sample_rate_tree), and the minimum relative improvement in squared error reduction for a split to occur (min_split_imprv).
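As a sketch of the random-search tuning mentioned above (the actual search ranges are listed in Appendix A; the ranges, estimator, and parameter names below are simplified assumptions mapped onto scikit-learn's GradientBoostingClassifier rather than the H2O parameters), hyperparameters could be tuned as follows.

```python
from scipy.stats import randint, uniform
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

# Simplified search space loosely mirroring the hyperparameters discussed above.
param_distributions = {
    "max_depth": randint(1, 30),           # tree depth searched in [1, 29]
    "subsample": uniform(0.2, 0.8),        # row sampling rate in [0.2, 1.0]
    "learning_rate": [0.05],
    "min_samples_leaf": [1, 2, 4, 8, 16],  # analogous to min_rows
}

search = RandomizedSearchCV(
    GradientBoostingClassifier(n_estimators=100, random_state=0),
    param_distributions=param_distributions,
    n_iter=20, cv=5, scoring="roc_auc", random_state=0,
)
# search.fit(X_tr, y_tr); search.best_params_  # tuned values feed the base learners
```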

4. Evaluation

This section presents the evaluation of the proposed predictive model. We first introduce the datasets considered in the evaluation. Then, we discuss in detail the settings of the experiments and the performance metrics considered in the experimental evaluation.

4.1. Datasets

To compare how the proposed classification model performs in different situations, we considered 7 real-world event logs. The event logs are publicly available: four of them are available at the 4TU Centre for Research Data (https://data.4tu.nl/info/en/, accessed on 6 July 2022), whereas the other three have been made available by the ISA Group at the University of Seville (https://www.isa.us.es/predictivemonitoring/ea/#datasets, accessed on 6 July 2022).
For each event log, one or more labeling functions y can be defined. Each labeling function, depending on the process owner’s objectives and requirements, defines a different outcome for the cases recorded in an event log. From the experimental evaluation standpoint, each outcome corresponds to a separate predictive process monitoring task. A total of 25 separate outcome prediction tasks were defined based on the 7 original event logs.
Table 1 shows the characteristics of each dataset used in this work, such as the total number of samples (ς_T); the number of samples labeled positive (ς_+); the number of samples labeled negative (ς_−); the number of attributes; and the imbalance ratio (IR), which is calculated as the ratio between the number of samples of the minority class (i.e., the least frequent class) and the number of samples of the majority class (i.e., the most frequent class). Strongly imbalanced datasets have a low IR value and vice versa. Normally, datasets with an IR lower than 0.5 are considered strongly imbalanced. Note that only 6 of the datasets considered in this evaluation have an IR greater than 0.5. Therefore, this evaluation considers mostly data that are strongly imbalanced. This can be expected, since companies normally strive to achieve a positive process outcome. Hence, in a normal situation, negative outcomes should be a small fraction of the total number of outcomes—i.e., cases—recorded in an event log.
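As a small worked example of the imbalance ratio, using the counts reported in Table 1 for one dataset:

```python
# Imbalance ratio for bpi11.f1 (Table 1): 53,841 positive vs. 13,639 negative samples.
pos, neg = 53841, 13639
ir = min(pos, neg) / max(pos, neg)
print(round(ir, 3))  # 0.253 -> below 0.5, hence strongly imbalanced
```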
The event logs and the labeling functions to create the datasets are discussed in detail next.
1.
BPIC 2011
This log records the events of a process in a Dutch academic hospital over a three-year period. Each process case compiles a patient's medical history, where operations and therapies are recorded as activities. Four labeling functions are defined for this event log. Each label records whether a trace τ violates or fulfills a linear temporal logic (LTL) constraint φ defined over the order and occurrence of specific activities in the trace [27] (a small sketch of evaluating such constraints over a trace is given after this dataset list):
  • bpi11.f1: φ = F("tumor marker CA-19.9") ∨ F("ca-125 using meia")
  • bpi11.f2: φ = G("CEA - tumor marker using meia" → F("squamous cell carcinoma using eia"))
  • bpi11.f3: φ = ¬("histological examination - biopsies nno") U ("ca-125 using meia")
  • bpi11.f4: φ = F("histological examination - big resectiep")
2.
BPIC 2012
Each case in this event log records the events that occurred in connection with a loan application at a financial institution. Three different labeling functions are defined for this event log, depending on the final result of a case, i.e., whether an application is accepted, rejected, or canceled. In this work, we treat each labeling function as a separate binary prediction task, which leads us to consider three datasets (bpi12.ac, bpi12.cc, and bpi12.dc) with a binary label.
3.
BPIC 2013
This event log records events of an incident management process at a large European manufacturer in the automotive industry. For each IT incident, a solution should be created in order to restore the IT services with minimal business disruption. An incident is closed after a solution to the problem has been found and the service restored. In this work, we use the same datasets already considered by Marquez-Chamorro et al. [2]. In their work, the authors consider three distinct prediction tasks, depending on the risk circumstances to be predicted. The first one, a push-to-front scenario, considers the situation in which first-line support personnel are responsible for handling the majority of occurrences. A binary label is assigned to each incident depending on whether it was resolved using only the 1st line support team or whether it required the intervention of the 2nd or 3rd line support team. As in the original publication [2], for this binary label, a sliding window encoding is considered, leading to four datasets (bpi13.i, with i = 2, ..., 5), where i specifies the number of events. Another situation in this event log concerns the abuse of the wait-user substatus, which should not be used by action owners unless they are truly waiting for an end user, according to company policy. Also in this case, four datasets bpi13wup.i are available, where i is the size—i.e., number of events—of the window chosen for the encoding. A third situation concerns anticipating the ping-pong behavior, in which support teams repeatedly transfer incidents to one another, increasing the overall lifetime of the incident. Two datasets are defined (bpi13pp.2 and bpi13pp.3) for the window sizes i = 2, 3.
4.
BPIC 2015
This dataset contains event logs from five Dutch municipalities regarding the process of obtaining a building permit. We consider each municipality's dataset as a distinct event log and use the same labeling function for each dataset. As with BPIC 2011, the labeling function is determined by the fulfillment/violation of an LTL constraint. The prediction tasks for each of the five municipalities are designated by the abbreviation bpi15.k, where k = 1, ..., 5 denotes the municipality's number. The LTL constraint used in the labeling functions is
bpi15.k: φ = G("send confirmation receipt" → F("retrieve missing data"))
5.
BPIC 2017
This dataset is an updated version of the BPIC 2012 event log, containing events captured after the deployment of a new information system supporting the same loan application management process at a financial institution. Also for this event log, three labeling functions are defined based on the outcome of a loan application (accepted, canceled, rejected), which leads to the three datasets bpi17.a, bpi17.c, and bpi17.r.
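As mentioned in the dataset descriptions above, the BPIC 2011 and BPIC 2015 labels are derived from LTL constraints over the occurrence and ordering of activities. The following sketch (our illustration, not the authors' labeling code) evaluates the two patterns used above, F(a) ("a eventually occurs") and G(a → F(b)) ("every occurrence of a is eventually followed by b"), over a trace given as a list of activity names.

```python
from typing import List

def eventually(trace: List[str], a: str) -> bool:
    """F(a): activity a occurs at some position of the trace."""
    return a in trace

def globally_implies_eventually(trace: List[str], a: str, b: str) -> bool:
    """G(a -> F(b)): every occurrence of a is followed (at the same or a later
    position) by an occurrence of b."""
    return all(b in trace[i:] for i, act in enumerate(trace) if act == a)

# bpi15.k-style labeling: G("send confirmation receipt" -> F("retrieve missing data"))
trace = ["send confirmation receipt", "retrieve missing data", "decide"]
label = +1 if globally_implies_eventually(trace, "send confirmation receipt",
                                          "retrieve missing data") else -1
print(label)  # +1: the constraint is fulfilled
```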

4.2. Experimental Settings and Performance Metrics

The data preparation phase in outcome-based process predictive monitoring entails extracting the prefixes of the traces in an event log, defining the features, and encoding the prefixes [7]. For the BPIC 2013 datasets, we use the same encoding used in [2] and discussed in Section 4.1; the encoding used for the other datasets is presented next. For each trace in a dataset, we extracted prefixes until the second-last event. As far as the event log encoding is concerned, we used last-state encoding [7]—that is, for each prefix extracted, we encode the attributes of its last event and the case-level attributes, i.e., the attributes that are constant for all prefixes. We then use the index-based strategy to generate features, which creates a separate feature for every attribute in the encoded prefixes, with the only exception of the timestamp, for which we generate separate features for the time of day, day, and month. Categorical attributes are one-hot encoded, whereas numerical attributes are encoded as-is. Note that the aim of this experiment is not to compare different encoding and/or feature engineering techniques, but to establish the level of performance of the proposed stacking ensemble scheme in outcome-based process predictive monitoring. With this aim in mind, we argue that, on the one hand, the design choices that we made in this experimental evaluation are fairly standard in the literature while, on the other hand, they have allowed us to keep the number of experiments manageable within a reasonable timeframe.
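The sketch below illustrates this data preparation on a toy log (a simplified stand-in for the actual pipeline; the column names and the toy events are ours): prefixes are extracted up to the second-last event of each trace and encoded with a last-state encoding, one-hot encoding the categorical attributes.

```python
import pandas as pd

# Toy event log: one row per event, ordered by case and timestamp.
log = pd.DataFrame({
    "case_id":  ["c1", "c1", "c1", "c2", "c2"],
    "activity": ["submit", "check", "approve", "submit", "reject"],
    "resource": ["Ann", "Bob", "Ann", "Ann", "Cat"],
    "amount":   [1000, 1000, 1000, 500, 500],   # case-level attribute
})

rows = []
for case_id, events in log.groupby("case_id", sort=False):
    events = events.reset_index(drop=True)
    # prefixes up to the second-last event of the trace
    for l in range(1, len(events)):
        last = events.iloc[l - 1]                 # last event of the prefix
        rows.append({
            "case_id": case_id,
            "prefix_len": l,
            "last_activity": last["activity"],    # last-state event attributes
            "last_resource": last["resource"],
            "amount": last["amount"],             # case-level attribute
        })

encoded = pd.get_dummies(pd.DataFrame(rows),
                         columns=["last_activity", "last_resource"])  # one-hot
print(encoded.head())
```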
In the experiments, we adopted a subsampling validation technique (five runs of 80/20 hold-out), where the final result for each model and dataset is the average of 5 runs. As previously stated, a total of 25 event log datasets (see Table 1) were considered, along with 4 classification algorithms (the proposed one and its three base models), giving a total of 100 classifier–dataset pairs. All experiments were conducted on a machine with an Intel Xeon processor, 32 GB of memory, and running the Linux operating system. The code to reproduce the experiment is publicly available at https://bit.ly/3tlZIIT (accessed on 7 July 2022).
The performance of a model on a dataset is evaluated based on six different metrics: accuracy, the area under the receiver operating characteristic curve (AUC), the area under the precision–recall curve (AUCPR), F1-score, F2-score, and Matthews Correlation Coefficient (MCC). Next, we briefly outline the definition of these metrics.
A classification algorithm predicts the class for each data sample, providing a predicted label (i.e., positive or negative) to each sample. As a result, each sample belongs to one of these four categories at the end of the classification process:
  • T P : positive samples that are (correctly) predicted as positive (True Positives).
  • T N : negative samples that are (correctly) predicted as negative (True Negatives).
  • F P : negative samples that are (wrongly) predicted as positive (False Positives).
  • F N : positive samples that are (wrongly) predicted as negative (False Negatives).
This categorization is typically displayed in a confusion matrix $T = \begin{pmatrix} TP & FN \\ FP & TN \end{pmatrix}$, which summarizes the outcome of a binary classification. Let us denote $FN + TP = \varsigma_+$ and $FP + TN = \varsigma_-$; then, a classifier has perfect performance if $T = \begin{pmatrix} \varsigma_+ & 0 \\ 0 & \varsigma_- \end{pmatrix}$. From the confusion matrix $T$, several performance metrics can be derived as follows.
Accuracy is the ratio between the correctly predicted samples and the total number of samples (i.e., $\varsigma_T$) in the dataset:
$$Accuracy = \frac{TP + TN}{\varsigma_T}$$
The $F_\beta$ score is defined as the harmonic mean of precision and recall. Precision is the ratio of true positives ($TP$) to all predicted positives ($TP + FP$), while recall is the ratio of true positives to actual positives ($\varsigma_+$); $\beta$ is a parameter of the harmonic mean. The common formulation of the $F_\beta$ score is the following:
$$F_\beta = \frac{(1 + \beta^2) \cdot \mathit{precision} \cdot \mathit{recall}}{\beta^2 \cdot \mathit{precision} + \mathit{recall}}, \qquad \beta \in \mathbb{R},\ \beta > 0$$
$AUC$ refers to the area under the receiver operating characteristic (ROC) curve. It measures recall on the vertical axis against fallout on the horizontal axis at different thresholds. It is formally estimated as
$$AUC = \int_0^1 \mathit{recall}(\mathit{fallout}) \, d\,\mathit{fallout} = \int_0^1 \mathit{recall}(\mathit{fallout}^{-1}(x)) \, dx$$
where recall and fallout are obtained as $\frac{TP}{\varsigma_+}$ and $\frac{FP}{\varsigma_-}$, respectively.
$AUCPR$ is a less common performance measure defined as the area under the precision–recall curve. Even though AUCPR is less commonly used, it is deemed to be more informative than AUC, particularly on imbalanced classification tasks [28]. For the calculation of AUCPR, the interpolation between two points $\alpha$ and $\beta$ in the precision–recall space is specified as a function:
$$y = \frac{TP_\alpha + x}{TP_\alpha + x + FP_\alpha + \frac{(FP_\beta - FP_\alpha) \cdot x}{TP_\beta - TP_\alpha}}$$
where $x$ is any real value between $TP_\alpha$ and $TP_\beta$.
The $MCC$ is a contingency matrix method of computing the Pearson product–moment correlation coefficient between the actual and predicted samples:
$$MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP) \cdot \varsigma_- \cdot (TN + FN) \cdot \varsigma_+}}$$
These metrics were adopted to provide more realistic estimates of the behavior of the investigated classifiers. While accuracy and $F_\beta$ are two widely used measures in machine learning research, they may provide misleading findings when used with imbalanced datasets because they do not account for the ratio of positive to negative classes. Chicco and Jurman [29] have shown that MCC has a clear and concise objective: to obtain a high score, the classifier must correctly predict the majority of negative examples and the majority of positive examples, regardless of their ratios in the entire dataset. F1 and accuracy, in contrast, produce trustworthy estimates when applied to balanced datasets, but offer inaccurate results when applied to imbalanced data problems. In addition, Chicco and Jurman [29] showed that MCC is more informative and truthful than balanced accuracy, bookmaker informedness, and markedness. We consider AUCPR for evaluating the performance of classifiers since it has been found to be more informative than AUC, particularly when dealing with imbalanced cases [28].
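For reference, all six metrics can be computed with scikit-learn as in the following sketch (the toy labels and scores are made up; average_precision_score is used here as the customary step-wise estimate of the area under the precision–recall curve).

```python
from sklearn.metrics import (accuracy_score, fbeta_score, matthews_corrcoef,
                             roc_auc_score, average_precision_score)

y_true  = [1, 1, 1, 0, 0, 1, 0, 1]                      # 1 = positive, 0 = negative
y_score = [0.9, 0.8, 0.3, 0.4, 0.1, 0.7, 0.6, 0.55]     # predicted probability of class 1
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]        # thresholded labels

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1:      ", fbeta_score(y_true, y_pred, beta=1))
print("F2:      ", fbeta_score(y_true, y_pred, beta=2))
print("MCC:     ", matthews_corrcoef(y_true, y_pred))
print("AUC:     ", roc_auc_score(y_true, y_score))
print("AUCPR:   ", average_precision_score(y_true, y_score))  # area under the PR curve
```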

4.3. Results

The objectives of the experimental evaluation are to appraise whether the proposed approach outperforms the base models (RF, XGB, and GBM) and whether the performance difference, if any, is statistically significant. To address these objectives, we first analyze the performance obtained by the proposed method and the base models using the six performance metrics defined in the previous section. Note that, while presenting the results, to facilitate the comparison between the proposed model and the baselines, we always consider the performance aggregated across all the prefixes extracted from the datasets—that is, we do not break down the performance of the models by prefix length.
Figure 1 shows the average performance of the proposed and base models over the different event logs. The proposed model (referred to as PROP) demonstrates its superiority over its base models, irrespective of the performance metric considered. The performance increase obtained by the proposed model is more evident when the MCC metric is considered. Note that MCC is an appropriate performance metric when the classes to be predicted are imbalanced, which is the case for most event logs considered in this paper (see Section 4.1). The relative performance increase in PROP with respect to the base models for each dataset and performance metric is reported numerically in Table 2.
Among the base models, GBM emerges as the best-performing one. Teinemaa et al. [7], in their benchmark of outcome-based predictive models, reported XGB to be the superior classifier in this type of predictive problem. However, in their study, they did not consider GBM as a classifier, nor some of the performance metrics considered here, such as MCC and AUCPR. Our results show that the base model GBM generally outperforms XGB.
To better understand the superior performance of the proposed method, Figure 2 shows boxplots of the performance results achieved on all event logs. We can see that PROP is the best-performing model considering the median and the interquartile range on all the datasets considered in the experimental evaluation. Figure 3 shows the correlation plots for the F1-score and MCC performance metrics between the predictive models and all the datasets considered in the evaluation, clustered using the performance achieved as a clustering measure. The performance achieved is indicated using a heatmap (the warmer the color, the higher the performance achieved). We can see that the warmer colors are associated with the PROP model. In more detail, while all the classifiers achieve high performance on the BPI11 and BPI15 datasets, PROP shows better performance also on the other datasets, e.g., BPI17 and BPI12, where the performance achieved by the baseline models is normally lower.
Finally, to assess whether the performance differences among the models considered in this evaluation are significant, Figure 4 shows the results of the Nemenyi test for the different performance metrics. For each performance metric, the plot first ranks the models from best (on the left-hand side) to worst (on the right-hand side). Then, models are grouped together if the difference between their average rankings falls below the critical distance (CD), that is, the minimum ranking difference required for significance at the chosen level α. While for the performance metrics F1-score, accuracy, and AUC the proposed model PROP is grouped with at least GBM (i.e., its ranking difference with at least GBM is not statistically significant), PROP remains alone and top-ranked for the performance metrics MCC and AUCPR. This indicates that, when considering MCC and AUCPR, not only is the performance achieved by PROP the highest among the competing models, but the performance difference is also significant across the datasets considered.
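The ranking analysis underlying Figure 4 can be reproduced along the following lines; the sketch below (with made-up scores) computes the Friedman statistic and the average ranks on which a critical difference plot is based, while the Nemenyi post hoc test itself is available, for instance, in the scikit-posthocs package.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = datasets, columns = models (PROP, RF, GBM, XGB); made-up MCC scores
scores = np.array([
    [0.82, 0.78, 0.80, 0.79],
    [0.65, 0.58, 0.62, 0.60],
    [0.91, 0.90, 0.90, 0.89],
    [0.55, 0.47, 0.52, 0.49],
    [0.73, 0.70, 0.72, 0.69],
])
models = ["PROP", "RF", "GBM", "XGB"]

stat, p = friedmanchisquare(*scores.T)                        # H0: all models perform alike
ranks = np.mean([rankdata(-row) for row in scores], axis=0)   # rank 1 = best per dataset
print(f"Friedman chi2={stat:.2f}, p={p:.4f}")
print(dict(zip(models, ranks)))
```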
In summary, the proposed method generally outperforms its base models. The performance increase of the proposed model appears to be higher for those datasets where predictive models normally achieve on average a lower performance. The performance increase achieved by the proposed model, particularly with respect to GBM, which emerges as the best-performing base model, is statistically significant when considering performance metrics more appropriate to evaluate the performance on imbalanced classification problems, e.g., MCC and AUCPR.

5. Conclusions

We have presented a stacked ensemble model using strong learners for addressing the problem of outcome-based predictive monitoring. The objective of the paper is to demonstrate that the proposed model outperforms the strong learners XGB, RF, and GBM that we have considered as base learners. The experimental evaluation, conducted on several publicly available event logs, demonstrated that this is the case: the proposed stacked ensemble generally outperforms the strong learners and this effect is more evident when performance metrics more suitable for classification using imbalanced datasets, such as AUCPR and MCC, are considered.
The research presented in this paper has several limitations. First, in the experimental evaluation, we considered only one specific combination of prefix encoding and bucketing. While this was done to keep the number of experiments manageable, we cannot rule out that different settings would lead to different conclusions. However, Teinemaa et al. [7] demonstrated in their extensive benchmark that the effect of different encoding and bucketing methods on performance is lower in magnitude than that of the choice of classifier. Second, in this work, we did not analyze the results by prefix length. Our objective was to conduct a large number of experiments and to aggregate the results obtained in order to establish, with the support of statistical tests, whether the proposed stacking ensemble had a better overall performance than the baselines. To do so, we had to rely on standard performance measures on aggregated observations; therefore, we did not consider other measures of performance in outcome-based predictive monitoring, such as earliness or stability of the predictions.
The work presented here can be extended in several ways. First, event log datasets are typically imbalanced. Since this might affect the effectiveness of the classification algorithms, it would be interesting to employ undersampling or oversampling approaches in order to better understand the behavior of the classification algorithms under balanced and imbalanced conditions. Second, there has been significant progress in applying deep learning models to tabular data, with research often claiming that such models outperform ensemble models—i.e., XGB—in some cases [30,31]. Additional research is certainly necessary in this regard, especially to determine whether deep learning models perform statistically better on tabular event log data.

Author Contributions

Conceptualization, B.A.T. and M.C.; methodology, B.A.T.; validation, M.C.; investigation, B.A.T.; writing—original draft preparation, B.A.T. and M.C.; writing—review and editing, B.A.T. and M.C.; visualization, B.A.T.; supervision, M.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the NRF Korea project n. 2022R1F1A072843 and the 0000 Project Fund (Project Number 1.220047.01) of UNIST (Ulsan National Institute of Science & Technology).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A

We specify the hyperparameter tuning space of RF, GBM, and XGB as follows. We kept ntrees at 100 for all classifiers, while the other hyperparameters were tuned as follows. The max_depth was searched within the range of 1 to 29; the range for sample_rate and col_sample_rate_tree is 0.2 to 1 with an interval of 0.01. The parameter col_sample_rate_level can usually take a value > 0.0 and ≤ 2.0; thus, we searched this value from 0.9 to 1.1. We searched the value of min_rows as 2^[0,γ], where γ is defined as log2(number of rows of the training set) − 1. Next, the hyperparameter searches for nbins and nbins_cats were specified as 2^[4,10] and 2^[4,12], respectively. It is important to tune min_split_imprv as it can help reduce overfitting; in this study, we tuned this parameter over four possible values, i.e., 0, 1 × 10^-8, 1 × 10^-6, and 1 × 10^-4. Lastly, histogram_type was searched over three possible types, i.e., uniform adaptive, quantiles global, and round robin. On each dataset, the optimal settings for each classifier are shown below.
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 2 | 24 | 256 | - | 2048 | na | 0.59 | - | 0.31 | 1.00 × 10^-6 | Quantiles global
GBM | 2 | 25 | 128 | 0.05 | 2048 | 0.54 | 0.72 | 1.07 | 0.58 | 1.00 × 10^-4 | Uniform adaptive
XGB | 2 | 27 | - | 0.05 | - | 0.73 | 0.8 | - | 0.52 | 1.00 × 10^-8 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 2 | 26 | 256 | - | 16 | - | 0.85 | 1.06 | 0.34 | 1.00 × 10^-8 | Uniform adaptive
GBM | 4 | 24 | 32 | 0.05 | 64 | 0.95 | 0.7 | 1.03 | 0.29 | 1.00 × 10^-4 | Uniform adaptive
XGB | 2 | 27 | - | 0.05 | - | 0.58 | 0.78 | - | 0.52 | 1.00 × 10^-4 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 4 | 26 | 1024 | - | 128 | - | 0.36 | 1.08 | 0.72 | 1.00 × 10^-8 | Quantiles global
GBM | 1 | 21 | 512 | 0.05 | 512 | 0.5 | 0.64 | 1.02 | 0.65 | 1.00 × 10^-8 | Round robin
XGB | 2 | 12 | - | 0.05 | - | 0.71 | 0.92 | - | 0.71 | 1.00 × 10^-8 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 4 | 26 | 1024 | - | 64 | - | 0.7 | 1.04 | 0.62 | 1.00 × 10^-4 | Quantiles global
GBM | 4 | 16 | 1024 | 0.05 | 512 | 0.75 | 0.5 | 0.99 | 0.8 | 1.00 × 10^-6 | Uniform adaptive
XGB | 2 | 25 | - | 0.05 | - | 0.82 | 0.42 | - | 0.39 | 1.00 × 10^-6 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 4 | 26 | 1024 | - | 64 | - | 0.7 | 1.04 | 0.62 | 1.00 × 10^-4 | Quantiles global
GBM | 8 | 28 | 16 | 0.05 | 128 | 0.6 | 0.64 | - | 0.89 | 0 | Uniform adaptive
XGB | 8 | 12 | - | 0.05 | - | 0.83 | 0.87 | - | 0.75 | 1.00 × 10^-6 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 2 | 26 | 256 | - | 16 | - | 0.85 | 1.06 | 0.34 | 1.00 × 10^-8 | Uniform adaptive
GBM | 1 | 21 | 512 | 0.05 | 32 | 0.72 | 0.54 | 1.08 | 0.32 | 0 | Uniform adaptive
XGB | 1 | 17 | - | 0.05 | - | 0.7 | 0.92 | - | 0.55 | 1.00 × 10^-4 | -
BPIC 2011
Classifier | min_rows | max_depth | nbins | learn_rate | nbins_cats | col_sample_rate | sample_rate | col_sample_rate_level | col_sample_rate_tree | min_split_imprv | histogram_type
RF | 8 | 27 | 512 | - | 256 | - | 0.95 | 1.07 | 0.73 | 1.00 × 10^-6 | Quantiles global
GBM | 2 | 24 | 32 | 0.05 | 64 | 0.95 | 0.7 | 1.03 | 0.29 | 1.00 × 10^-4 | Uniform adaptive
XGB | 2 | 26 | - | 0.05 | - | 0.71 | 0.92 | - | 0.71 | 1.00 × 10^-8 | -

References

1. Van der Aalst, W.M. Process Mining: Data Science in Action; Springer: Berlin/Heidelberg, Germany, 2016.
2. Márquez-Chamorro, A.E.; Resinas, M.; Ruiz-Cortés, A. Predictive monitoring of business processes: A survey. IEEE Trans. Serv. Comput. 2017, 11, 962–977.
3. Verenich, I.; Dumas, M.; Rosa, M.L.; Maggi, F.M.; Teinemaa, I. Survey and cross-benchmark comparison of remaining time prediction methods in business process monitoring. ACM Trans. Intell. Syst. Technol. 2019, 10, 1–34.
4. Evermann, J.; Rehse, J.R.; Fettke, P. Predicting process behaviour using deep learning. Decis. Support Syst. 2017, 100, 129–140.
5. Tama, B.A.; Comuzzi, M. An empirical comparison of classification techniques for next event prediction using business process event logs. Exp. Syst. Appl. 2019, 129, 233–245.
6. Tama, B.A.; Comuzzi, M.; Ko, J. An Empirical Investigation of Different Classifiers, Encoding, and Ensemble Schemes for Next Event Prediction Using Business Process Event Logs. ACM Trans. Intell. Syst. Technol. 2020, 11, 1–34.
7. Teinemaa, I.; Dumas, M.; Rosa, M.L.; Maggi, F.M. Outcome-oriented predictive process monitoring: Review and benchmark. ACM Trans. Knowl. Discov. Data 2019, 13, 17.
8. Senderovich, A.; Di Francescomarino, C.; Maggi, F.M. From knowledge-driven to data-driven inter-case feature encoding in predictive process monitoring. Inf. Syst. 2019, 84, 255–264.
9. Kim, J.; Comuzzi, M.; Dumas, M.; Maggi, F.M.; Teinemaa, I. Encoding resource experience for predictive process monitoring. Decis. Support Syst. 2022, 153, 113669.
10. Van der Laan, M.J.; Polley, E.C.; Hubbard, A.E. Super learner. Stat. Appl. Genet. Mol. Biol. 2007, 6.
11. Di Francescomarino, C.; Ghidini, C.; Maggi, F.M.; Milani, F. Predictive Process Monitoring Methods: Which One Suits Me Best? In Proceedings of the International Conference on Business Process Management, Sydney, Australia, 9–14 September 2018; Springer: Cham, Switzerland, 2018; pp. 462–479.
12. Santoso, A. Specification-driven multi-perspective predictive business process monitoring. In Enterprise, Business-Process and Information Systems Modeling; Springer: Cham, Switzerland, 2018; pp. 97–113.
13. Verenich, I.; Dumas, M.; La Rosa, M.; Nguyen, H. Predicting process performance: A white-box approach based on process models. J. Softw. Evol. Process 2019, 31, e2170.
14. Galanti, R.; Coma-Puig, B.; de Leoni, M.; Carmona, J.; Navarin, N. Explainable predictive process monitoring. In Proceedings of the 2020 2nd International Conference on Process Mining (ICPM), Padua, Italy, 5–8 October 2020; pp. 1–8.
15. Rama-Maneiro, E.; Vidal, J.C.; Lama, M. Deep learning for predictive business process monitoring: Review and benchmark. arXiv 2020, arXiv:2009.13251.
16. Neu, D.A.; Lahann, J.; Fettke, P. A systematic literature review on state-of-the-art deep learning methods for process prediction. Artif. Intell. Rev. 2021, 55, 801–827.
17. Kratsch, W.; Manderscheid, J.; Röglinger, M.; Seyfried, J. Machine learning in business process monitoring: A comparison of deep learning and classical approaches used for outcome prediction. Bus. Inf. Syst. Eng. 2020, 63, 261–276.
18. Metzger, A.; Neubauer, A.; Bohn, P.; Pohl, K. Proactive Process Adaptation Using Deep Learning Ensembles. In Proceedings of the International Conference on Advanced Information Systems Engineering, Rome, Italy, 3–7 June 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 547–562.
19. Wang, J.; Yu, D.; Liu, C.; Sun, X. Outcome-oriented predictive process monitoring with attention-based bidirectional LSTM neural networks. In Proceedings of the 2019 IEEE International Conference on Web Services (ICWS), Milan, Italy, 8–13 June 2019; pp. 360–367.
20. Folino, F.; Folino, G.; Guarascio, M.; Pontieri, L. Learning effective neural nets for outcome prediction from partially labelled log data. In Proceedings of the 2019 IEEE 31st International Conference on Tools with Artificial Intelligence (ICTAI), Portland, OR, USA, 4–6 November 2019; pp. 1396–1400.
21. Pasquadibisceglie, V.; Appice, A.; Castellano, G.; Malerba, D.; Modugno, G. ORANGE: Outcome-oriented predictive process monitoring based on image encoding and CNNs. IEEE Access 2020, 8, 184073–184086.
22. Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259.
23. Bergstra, J.; Bengio, Y. Random search for hyper-parameter optimization. J. Mach. Learn. Res. 2012, 13, 281–305.
24. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
25. Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
26. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
27. Di Francescomarino, C.; Dumas, M.; Maggi, F.M.; Teinemaa, I. Clustering-based predictive process monitoring. IEEE Trans. Serv. Comput. 2016, 12, 896–909.
28. Saito, T.; Rehmsmeier, M. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 2015, 10, e0118432.
29. Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6.
30. Shwartz-Ziv, R.; Armon, A. Tabular data: Deep learning is not all you need. Inf. Fusion 2022, 81, 84–90.
31. Borisov, V.; Leemann, T.; Seßler, K.; Haug, J.; Pawelczyk, M.; Kasneci, G. Deep neural networks and tabular data: A survey. arXiv 2021, arXiv:2110.01889.
Figure 1. (a–g) Performance comparison between the proposed ensemble (PROP) and its base learners across different datasets in terms of mean accuracy, AUC, AUCPR, F1, F2, and MCC scores.
Figure 2. (a–f) Boxplots (center, median; box, interquartile range (IQR); whiskers, 1.5 × IQR) illustrating the average performance distribution of the classification algorithms.
Figure 3. (a,b) Correlation plot denoting clustered solution spaces. The color represents the performance score of the classifier (low, yellow; high, red), ranging from 0.4 to 1. For each performance metric, the datasets are roughly grouped based on the classification algorithms.
Figure 4. Critical difference plot using the Nemenyi test (significance level α = 0.05) across performance metrics: F1 score (a), F2 score (b), Matthews correlation coefficient (MCC) score (c), accuracy score (d), area under ROC curve (AUC) score (e), and area under precision–recall curve (AUCPR) score (f).
Table 1. The characteristics of the event log datasets employed in this study. A severely imbalanced dataset occurs when IR < 0.5.
Dataset | Event Log | ς_T | ς_+ | ς_− | #Input Features | IR
BPIC 2011 | bpi11.f1 | 67,480 | 53,841 | 13,639 | 22 | 0.253
BPIC 2011 | bpi11.f2 | 149,730 | 50,051 | 99,679 | 22 | 0.502
BPIC 2011 | bpi11.f3 | 70,546 | 62,981 | 7565 | 22 | 0.120
BPIC 2011 | bpi11.f4 | 93,065 | 71,301 | 21,764 | 22 | 0.305
BPIC 2012 | bpi12.ac | 186,693 | 86,948 | 99,745 | 14 | 0.872
BPIC 2012 | bpi12.cc | 186,693 | 129,890 | 56,803 | 14 | 0.437
BPIC 2012 | bpi12.cd | 186,693 | 156,548 | 30,145 | 14 | 0.193
BPIC 2013 | bpi13.2 | 33,861 | 30,452 | 3409 | 23 | 0.112
BPIC 2013 | bpi13.3 | 35,548 | 32,140 | 3408 | 34 | 0.106
BPIC 2013 | bpi13.4 | 7301 | 3893 | 3408 | 45 | 0.875
BPIC 2013 | bpi13.5 | 30,916 | 27,508 | 3408 | 56 | 0.124
BPIC 2013 | bpi13wup.2 | 65,530 | 61,659 | 3871 | 23 | 0.063
BPIC 2013 | bpi13wup.3 | 65,530 | 61,659 | 3871 | 34 | 0.063
BPIC 2013 | bpi13wup.4 | 65,529 | 61,659 | 3871 | 45 | 0.063
BPIC 2013 | bpi13wup.5 | 65,528 | 61,659 | 3871 | 56 | 0.063
BPIC 2013 | bpi13pp.2 | 61,135 | 59,619 | 1516 | 45 | 0.025
BPIC 2013 | bpi13pp.3 | 61,135 | 59,619 | 1516 | 67 | 0.025
BPIC 2015 | bpi15.1 | 28,775 | 20,635 | 8140 | 31 | 0.394
BPIC 2015 | bpi15.2 | 41,202 | 31,653 | 9549 | 31 | 0.302
BPIC 2015 | bpi15.3 | 57,488 | 43,667 | 13,821 | 32 | 0.317
BPIC 2015 | bpi15.4 | 24,234 | 19,878 | 4356 | 29 | 0.219
BPIC 2015 | bpi15.5 | 54,562 | 34,948 | 19,614 | 33 | 0.561
BPIC 2017 | bpi17.a | 1,198,366 | 665,182 | 533,184 | 25 | 0.802
BPIC 2017 | bpi17.c | 1,198,366 | 677,682 | 520,684 | 25 | 0.768
BPIC 2017 | bpi17.r | 1,198,366 | 1,053,868 | 144,498 | 25 | 0.137
Table 2. Percentage improvement that the proposed model offers over the base classifiers.
Dataset | Event Logs | Classifier M | Classifier N | F1 | F2 | MCC | Accuracy | AUCPR | AUC
BPIC11 | bpi11.f1 | PROP | RF | 0.056 | 0.041 | 0.275 | 0.089 | 0.002 | 0.005
BPIC11 | bpi11.f1 | PROP | XGB | 0.121 | 0.099 | 0.597 | 0.193 | 0.001 | 0.002
BPIC11 | bpi11.f1 | PROP | GBM | -0.014 | -0.007 | -0.069 | -0.022 | -0.001 | -0.005
BPIC11 | bpi11.f2 | PROP | RF | 0.105 | 0.094 | 0.158 | 0.070 | 0.001 | 0.000
BPIC11 | bpi11.f2 | PROP | XGB | 0.035 | 0.020 | 0.053 | 0.023 | 0.000 | 0.000
BPIC11 | bpi11.f2 | PROP | GBM | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
BPIC11 | bpi11.f3 | PROP | RF | 0.036 | 0.014 | 0.335 | 0.064 | -0.008 | -0.030
BPIC11 | bpi11.f3 | PROP | XGB | 0.036 | 0.027 | 0.335 | 0.064 | -0.006 | -0.019
BPIC11 | bpi11.f3 | PROP | GBM | 0.000 | 0.000 | 0.000 | 0.000 | -0.008 | -0.032
BPIC11 | bpi11.f4 | PROP | RF | 0.025 | 0.014 | 0.105 | 0.038 | 0.000 | 0.000
BPIC11 | bpi11.f4 | PROP | XGB | 0.007 | 0.008 | 0.030 | 0.011 | 0.000 | 0.000
BPIC11 | bpi11.f4 | PROP | GBM | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
BPIC12 | bpi12.ac | PROP | RF | 0.813 | 0.398 | 1.570 | 0.697 | 0.361 | 0.308
BPIC12 | bpi12.ac | PROP | XGB | 3.890 | 2.822 | 8.186 | 3.620 | 2.158 | 1.903
BPIC12 | bpi12.ac | PROP | GBM | 0.575 | 0.399 | 1.149 | 0.505 | -0.180 | 0.511
BPIC12 | bpi12.cc | PROP | RF | 0.246 | 0.244 | 0.911 | 0.320 | 0.096 | 0.217
BPIC12 | bpi12.cc | PROP | XGB | 2.176 | 1.509 | 8.233 | 3.084 | 1.041 | 2.161
BPIC12 | bpi12.cc | PROP | GBM | 0.351 | 0.171 | 2.551 | 0.602 | 0.162 | 0.624
BPIC12 | bpi12.dc | PROP | RF | 0.196 | 0.133 | 1.431 | 0.337 | 0.065 | 0.270
BPIC12 | bpi12.dc | PROP | XGB | 0.861 | 0.478 | 6.651 | 1.499 | 0.192 | 0.947
BPIC12 | bpi12.dc | PROP | GBM | 0.351 | 0.171 | 2.551 | 0.602 | 0.162 | 0.624
BPIC13 | bpi13.2 | PROP | RF | 4.077 | 2.726 | 3.315 | 0.209 | 2.341 | 0.922
BPIC13 | bpi13.2 | PROP | XGB | 3.465 | 3.379 | 0.722 | 0.177 | 3.804 | 0.752
BPIC13 | bpi13.2 | PROP | GBM | 3.915 | 2.227 | 2.525 | 0.274 | 1.330 | 1.144
BPIC13 | bpi13.3 | PROP | RF | 5.452 | 3.014 | 6.289 | 0.691 | 5.522 | 1.365
BPIC13 | bpi13.3 | PROP | XGB | 3.874 | 0.586 | 4.663 | 0.199 | 1.754 | 0.487
BPIC13 | bpi13.3 | PROP | GBM | 0.287 | 2.366 | 1.551 | 0.016 | 1.074 | 0.232
BPIC13 | bpi13.4 | PROP | RF | 6.653 | 4.868 | 7.473 | 0.040 | 5.144 | 0.968
BPIC13 | bpi13.4 | PROP | XGB | 6.157 | 4.549 | 7.712 | 0.024 | 6.809 | 0.746
BPIC13 | bpi13.4 | PROP | GBM | 0.475 | -0.326 | -0.885 | 0.008 | 3.454 | 0.221
BPIC13 | bpi13.5 | PROP | RF | 4.707 | 2.702 | 3.450 | -0.105 | 1.555 | 0.679
BPIC13 | bpi13.5 | PROP | XGB | 3.168 | 3.513 | 4.121 | -0.065 | 1.379 | 0.248
BPIC13 | bpi13.5 | PROP | GBM | -0.324 | 1.980 | -1.052 | 0.016 | 2.424 | 0.226
BPIC13 | bpi13wup.2 | PROP | RF | -0.494 | 2.428 | 1.012 | -0.185 | -4.976 | 0.517
BPIC13 | bpi13wup.2 | PROP | XGB | 6.165 | 6.141 | 8.639 | 0.089 | 3.712 | 0.844
BPIC13 | bpi13wup.2 | PROP | GBM | -1.506 | 1.368 | -0.431 | -0.024 | -2.088 | 0.094
BPIC13 | bpi13wup.3 | PROP | RF | 1.943 | 2.728 | 3.542 | -0.201 | -1.004 | 0.653
BPIC13 | bpi13wup.3 | PROP | XGB | 6.612 | 5.765 | 8.989 | 0.024 | 5.328 | 0.834
BPIC13 | bpi13wup.3 | PROP | GBM | 0.287 | 2.366 | 1.551 | 0.016 | 1.074 | 0.232
BPIC13 | bpi13wup.4 | PROP | RF | 6.653 | 4.868 | 7.473 | 0.040 | 5.144 | 0.968
BPIC13 | bpi13wup.4 | PROP | XGB | 6.157 | 4.549 | 7.712 | 0.024 | 6.809 | 0.746
BPIC13 | bpi13wup.4 | PROP | GBM | 0.475 | -0.326 | -0.885 | 0.008 | 3.454 | 0.221
BPIC13 | bpi13wup.5 | PROP | RF | 4.707 | 2.702 | 3.450 | -0.105 | 1.555 | 0.679
BPIC13 | bpi13wup.5 | PROP | XGB | 3.168 | 3.513 | 4.121 | -0.065 | 1.379 | 0.248
BPIC13 | bpi13wup.5 | PROP | GBM | -0.324 | 1.980 | -1.052 | 0.016 | 2.424 | 0.226
BPIC13 | bpi13pp.2 | PROP | RF | 4.136 | 1.087 | 1.315 | 0.075 | 3.654 | 0.498
BPIC13 | bpi13pp.2 | PROP | XGB | 6.713 | 5.204 | 3.569 | 0.108 | 6.804 | 0.249
BPIC13 | bpi13pp.2 | PROP | GBM | 0.459 | 1.010 | -2.629 | -0.058 | -1.459 | 0.043
BPIC13 | bpi13pp.3 | PROP | RF | 1.850 | 1.544 | 1.012 | -0.033 | 1.740 | 0.469
BPIC13 | bpi13pp.3 | PROP | XGB | 4.380 | 2.657 | 3.299 | 0.025 | 2.689 | 0.128
BPIC13 | bpi13pp.3 | PROP | GBM | 1.447 | 0.650 | 0.067 | -0.058 | 0.697 | 0.320
BPIC15 | bpi15.1 | PROP | RF | 0.073 | -0.005 | 0.271 | 0.107 | 0.003 | 0.007
BPIC15 | bpi15.1 | PROP | XGB | 0.039 | 0.009 | 0.119 | 0.053 | 0.007 | 0.015
BPIC15 | bpi15.1 | PROP | GBM | 0.050 | 0.058 | 0.179 | 0.071 | 0.004 | 0.011
BPIC15 | bpi15.2 | PROP | RF | 0.080 | 0.061 | 0.351 | 0.123 | -0.001 | -0.003
BPIC15 | bpi15.2 | PROP | XGB | -0.024 | -0.006 | -0.102 | -0.037 | 0.000 | -0.001
BPIC15 | bpi15.2 | PROP | GBM | 0.064 | 0.010 | 0.280 | 0.099 | 0.001 | 0.002
BPIC15 | bpi15.3 | PROP | RF | 0.070 | 0.041 | 0.285 | 0.105 | 0.002 | 0.006
BPIC15 | bpi15.3 | PROP | XGB | -0.011 | -0.039 | -0.051 | -0.017 | -0.001 | -0.002
BPIC15 | bpi15.3 | PROP | GBM | 0.069 | 0.071 | 0.286 | 0.105 | 0.001 | 0.004
BPIC15 | bpi15.4 | PROP | RF | -0.001 | -0.031 | 0.009 | 0.000 | -0.021 | -0.026
BPIC15 | bpi15.4 | PROP | XGB | 0.052 | 0.083 | 0.281 | 0.085 | -0.020 | -0.024
BPIC15 | bpi15.4 | PROP | GBM | 0.025 | 0.041 | 0.152 | 0.042 | -0.017 | -0.014
BPIC15 | bpi15.5 | PROP | RF | 0.275 | 0.179 | 0.762 | 0.350 | 0.005 | 0.009
BPIC15 | bpi15.5 | PROP | XGB | 0.051 | 0.014 | 0.139 | 0.064 | 0.001 | 0.001
BPIC15 | bpi15.5 | PROP | GBM | 0.029 | 0.032 | 0.080 | 0.037 | 0.003 | 0.004
BPIC17 | bpi17a | PROP | RF | 2.808 | 1.631 | 6.608 | 3.097 | 0.540 | 0.679
BPIC17 | bpi17a | PROP | XGB | 0.967 | 0.989 | 2.244 | 1.069 | 0.206 | 0.262
BPIC17 | bpi17a | PROP | GBM | 0.923 | 0.481 | 2.176 | 1.034 | 0.159 | 0.192
BPIC17 | bpi17c | PROP | RF | 2.769 | 1.486 | 6.327 | 2.986 | 0.536 | 0.639
BPIC17 | bpi17c | PROP | XGB | 0.928 | 0.846 | 1.975 | 0.960 | 0.201 | 0.223
BPIC17 | bpi17c | PROP | GBM | 0.884 | 0.338 | 1.906 | 0.926 | 0.155 | 0.153
BPIC17 | bpi17r | PROP | RF | 0.288 | 0.205 | 2.561 | 0.510 | 0.016 | 0.117
BPIC17 | bpi17r | PROP | XGB | 0.227 | 0.143 | 2.011 | 0.403 | 0.015 | 0.112
BPIC17 | bpi17r | PROP | GBM | 0.158 | 0.081 | 1.404 | 0.281 | 0.015 | 0.106
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
