Abstract

The existence and persistence of financial statement fraud (FSF) are detrimental to the financial health of global capital markets. A number of detection and prediction methods have been used to prevent, detect, and correct FSF, but their practicability has always been a challenge for researchers and auditors, as they often fail to address real-world problems. In this paper, both supervised and unsupervised approaches are employed to analyse financial data obtained from China’s stock market and detect FSF. The variables used in this paper are 18 financial items representing the fraud triangle. Additionally, this study examined the properties of five widely used supervised approaches, namely, multi-layer feed forward neural network (MFFNN), probabilistic neural network (PNN), support vector machine (SVM), multinomial log-linear model (MLM), and discriminant analysis (DA), applied in different real-life situations. The empirical results show that MFFNN yields the best classification results in the detection of fraudulent data presented in financial statements. The outcomes of this study can be applied to different types of financial statement datasets, as they present a practical way of constructing predictive models using a combination of supervised and unsupervised approaches.

1. Introduction

The current business environment is experiencing an upsurge in financial accounting fraud. As a result, financial accounting fraud detection has become an emerging topic for business practitioners, industries, and academic research [1]. The high-profile financial frauds revealed in large companies such as Enron, Lucent, WorldCom, and Satyam over the last decade emphasize the importance of detecting and reporting financial accounting fraud [2].

The Association of Certified Fraud Examiners (ACFE) classifies occupational fraud into three primary categories, namely, asset misappropriation, corruption, and financial statement fraud [3]. In this study, the focus is on financial statement fraud (FSF), which is reportedly the costliest type of fraud, although, comparatively, it occurs at a lower frequency (Figure 1).

Fraud occurs when perpetrators-in-disguise “cook the books” through the intentional misstatement or manipulation of financial data. The users of financial statements may be investors, creditors, lenders, shareholders, pensioners, and other market participants [4]. Financial fraud increasingly threatens market participants, and the gravity of the problem continues to rise. The effective detection of accounting fraud, however, remains a complex task for accounting professionals [5]. Traditional auditors fail to cope with emerging accounting frauds for many reasons, such as the lack of the required data mining knowledge, experience, and expertise due to the infrequency of financial frauds, and the efforts made for concealment and deception by other concerned people at finance departments, such as Chief Financial Officers (CFOs), financial managers, and accountants [6]. In addition, FSF is usually committed by a smart team of knowledgeable perpetrators (e.g., top executives and auditors) with a well-planned scheme who are capable of masking their deceit [7]. The need for additional automatic data analysis procedures and tools for the effective detection of falsified financial statements is therefore more pronounced today than ever. Nowadays, a series of standards, such as SAS 82, are put forth by accounting and audit professionals in order to improve the auditors’ performance in detecting material misstatement in financial data.

Financial ratios can be mined to examine the key characteristics of financial frauds. These ratios are calculated from the values reported in financial statements and can strengthen the detection of fraudulent material presented in financial statements by quantifying many aspects of a business. As an integral part of financial statement analysis, these ratios can reveal the status of receivables and bad debts, whether the business is carrying excess debt or inventory, whether the operating expenses are high, and whether the company’s assets are being properly used for generating income [8]. Liquidity, safety, profitability, and efficiency ratios are the most important families of financial ratios.
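To make these ratio families concrete, the following minimal Python sketch computes one illustrative ratio from each family. The function names and input figures are hypothetical examples, not the 18 variables listed in Table 1.

```python
# Illustrative sketch only: one example ratio per family named above.
# Inputs are hypothetical financial-statement line items.

def current_ratio(current_assets: float, current_liabilities: float) -> float:
    """Liquidity: ability to cover short-term obligations."""
    return current_assets / current_liabilities

def debt_ratio(total_liabilities: float, total_assets: float) -> float:
    """Safety (leverage): share of assets financed by debt."""
    return total_liabilities / total_assets

def net_profit_margin(net_income: float, revenue: float) -> float:
    """Profitability: net income earned per unit of revenue."""
    return net_income / revenue

def asset_turnover(revenue: float, total_assets: float) -> float:
    """Efficiency: revenue generated per unit of assets."""
    return revenue / total_assets

# Toy example values.
print(current_ratio(1_200_000, 800_000))      # 1.5
print(net_profit_margin(150_000, 2_000_000))  # 0.075
```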

Due to the advancement of data mining techniques, sophisticated approaches to knowledge discovery are now available for extracting previously unknown information from data. Supervised and unsupervised learning techniques are the two most widely used knowledge discovery techniques in data mining [9]. For example, clustering algorithms group data points by their natural similarities [10–15].

1.1. The Novelties of This Study

(1) Novelty in application: the algorithm designed for this study can be applied to any financial statement dataset containing the 18 variables used here, because the Research Methodology and Results section and the Conclusion identify the best approach to financial statement fraud detection. In other words, the results of this study enable any individual investor who wishes to buy stocks to assess whether the annual financial statements issued by companies reflect their real financial circumstances or are fraudulently manipulated to defraud investors. This study therefore equips stock market players with an applicable and accessible decision-support tool. Both the model and the results of this study are investor friendly.

(2) Novelty in theory: generally speaking, financial statement fraud detection approaches can be classified into two main families, corresponding to the two families of machine learning algorithms: supervised and unsupervised. In the former family (known as signature-based predictive models), past data are labeled as fraudulent or non-fraudulent, for instance, based on an auditor’s judgment; the algorithms then learn over these data to create a model that is applied to new instances appearing in the system. Unsupervised techniques, on the other hand, automatically detect the patterns and attributes considered “normal” for a given user (the individual investor in our study) and then flag data inconsistent with those patterns, as fraudulent data are expected to depart from the normal data pattern. Both families have their own advantages and disadvantages. In this study, supervised and unsupervised techniques were employed not simultaneously but sequentially, for the different characteristics they possess. Doing so raises the validity of the outcomes.

(3) Reducing the probability of data misplacement in clustering by adding a “suspicious” group: for the first time in the area of financial statement fraud detection, this study assigns the outcomes of the algorithm to three segments, namely, “fraudulent,” “non-fraudulent,” and “suspicious.” To refine the data that share some degree of attribution with both fraudulent and non-fraudulent cases and to produce high-quality clusters with higher intraclass similarity and lower interclass similarity, we introduced the additional “suspicious” segment to reduce the possibility of data misplacement. Data falling into the “suspicious” group are subjected to further evaluation by machine and human intelligence. Each of these three clusters undergoes a different procedure in this study; they form the cornerstone of the hypothesis testing by contributing different fractions in the simulation tests presented in this research.

(4) A major drawback of the existing works in the literature is addressed effectively in this study: previous works have used balanced datasets containing equal numbers of fraudulent and non-fraudulent cases for machine learning. This draws an unrealistic pattern that cannot support induction, because the proportion of fraudulent cases has always been observed to be only a small fraction of the whole dataset. In our statistical analysis, we conducted various scenarios that measure the performance of our model in different real-life situations.

In general, this study optimizes and refines the existing approaches to the application of machine learning in financial statement fraud detection. It ultimately benefits academicians and business practitioners in their strategic decision-making regarding investment in the stock market. Auditors of financial statements can also apply the techniques suggested in this research to their auditing procedures. The methodology and procedure used in this paper can easily be extended to other domains of financial crime, which can be described as novelty in the mode of execution and application.

This research set and achieved the following main objectives:

(1) Putting financial data into proper categories using cluster analysis with the help of an auditor.
(2) Evaluating the efficiency of five supervised predictive models with respect to precision and recall values.
(3) Using the outcomes to answer several secondary objectives of the research, designed as statistical hypothesis tests.

In other words, a compatible cluster analysis is conducted to automatically partition the data, and the clusters are then labeled with the help of an auditor. Later, the performance and learning rate of five widely used supervised predictive models are evaluated. For this purpose, given fractions of fraudulent companies are added to various sample sizes to study the behavior of the models in different real-life situations. The primary beneficiaries of this research are interested investors in the stock market, auditors involved in uncovering FSF, and emerging companies carefully observing their financial statements in order to profit in the market [8].

It is worth mentioning that the imbalanced data distribution used in this study does not hinder the learning task [16, 17]. Depending on the problem, design, and objectives of the research, one or a combination of several methods (e.g., using the right evaluation metrics, resampling with different ratios, over-/undersampling, generating synthetic samples, using different algorithms, using penalized models, etc.) can be applied for a better result; a sketch of one such method follows. However, each approach has its own pros and cons. It has been reported that the error rate caused by an imbalanced class distribution decreases when the number of examples of the minority class is representative, which means the positive class examples can be better learned [18].
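As an illustration of the resampling option mentioned above, the sketch below randomly undersamples the majority (non-fraudulent) class to reach a target fraud ratio. This is a generic technique, not the sampling design of this paper, which instead fixes the fraud and suspicious proportions directly (Section 3).

```python
import numpy as np

def undersample_to_ratio(X, y, fraud_ratio, seed=None):
    """Randomly undersample the majority (non-fraudulent, y == 0) class so
    that the minority (fraudulent, y == 1) class makes up `fraud_ratio`
    of the returned sample."""
    rng = np.random.default_rng(seed)
    fraud_idx = np.flatnonzero(y == 1)
    normal_idx = np.flatnonzero(y == 0)
    # Number of majority cases needed for the target minority ratio.
    n_normal = int(len(fraud_idx) * (1 - fraud_ratio) / fraud_ratio)
    keep = rng.choice(normal_idx, size=min(n_normal, len(normal_idx)),
                      replace=False)
    idx = rng.permutation(np.concatenate([fraud_idx, keep]))
    return X[idx], y[idx]

# Toy usage: 2000 cases with a ~2% fraud rate, resampled to ~10%.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 18))
y = (rng.random(2000) < 0.02).astype(int)
X_res, y_res = undersample_to_ratio(X, y, fraud_ratio=0.10, seed=1)
print(y_res.mean())  # roughly 0.10
```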

2. Literature Review

Auditing procedures have proven to have many deficiencies in the detection of fraudulent financial reporting, simply because they are not designed for that purpose. In an organization, the manager is the one who is morally responsible and accountable for the detection of fraudulent financial data, but in reality, the majority of financial statement frauds are committed with the awareness or consent of management. Any failure in detecting a misstated financial statement can severely harm the credibility of the audit profession [19]. Presently, auditing practices have to be conducted in a timely manner to cope with the increasing number of financial statement fraud cases. Novel techniques such as data mining offer advanced classification and prediction capabilities and can be employed to facilitate the auditors’ role in successfully accomplishing the task of fraud detection [20].

Data mining is a process in which different techniques are used to extract valid data patterns. This involves choosing the most appropriate data mining method (such as classification, regression, clustering, or association) [21] and then an algorithm belonging to one of these families. Finally, the chosen algorithm is used to solve the problem by setting the main parameters and validation procedures. In general, data mining methods are divided into two types, predictive and descriptive (e.g., clustering), and the former is further divided into statistical methods and symbolic methods. Statistical methods represent knowledge through mathematical models and their calculations, while symbolic methods represent knowledge using symbols and associations, which ultimately produce more interpretable models.

Regression models, artificial neural networks, support vector machines, Bayesian models, etc. fall into the category of statistical methods. Regression models are among the classical models that require a class of equation modeling. Linear, quadratic, and logistic regressions are among the most well-known models in data mining. They impose basic requirements on the data and try to use all of its features, whether useful or not. One of the most powerful mathematical models is the artificial neural network (ANN), which is suitable for almost all data mining tasks. There are different formulations of neural networks; the most commonly used are learning vector quantization, multi-layer feed forward neural networks, radial basis function networks, and the multi-layer perceptron. Neural networks are based on the definition of neurons, atomic units that aggregate their inputs into an output according to an activation function. These models are generally better than other models if the networks are configured properly, but they are less popular because of their complex structures. Support vector machines are based on learning theory and work very well when data are linearly separable. Unlike regression models, they do not usually require modeling interactions among variables, and, similar to ANNs, they are robust to noise and outliers.

Predictive methods are mainly attributed to supervised learning. Supervised methods disclose the relationships between input traits and a target attribute in the structure that we call “model.”

Regression and classification problems both belong to this category. In a supervised scenario, a model is first fitted to a training data set and then used to predict unobserved instances. Here, the main goal is to map the inputs to an output whose correct values are determined by a supervisor. In unsupervised learning, by contrast, there is no such supervisor and only input data are available; the purpose is therefore to find regularities, irregularities, similarities, relationships, and associations in the inputs. One advantage of unsupervised learning is that, unlike in supervised learning, more sophisticated models can be learned. Since in supervised learning the goal is to find the relation between two sets of observations, the difficulty of the learning process grows exponentially as the number of steps increases due to high computational costs, and thus such models cannot be learned deeply. The two famous problems that belong to unsupervised learning are clustering and association rules.

Data mining is widely applied in many domains. Below is a summary of its applications:

(i) Data mining applications in finance: money laundering and detection of other financial crimes, classification of customers and target marketing, loan payment prediction, and customer credit analysis.
(ii) Data mining applications in the retail industry: analysis of the effectiveness of sales campaigns, customer retention and analysis of customer loyalty, product recommendation, and cross-referencing of items.
(iii) Data mining for the telecommunication industry: fraudulent pattern analysis, identification of unusual patterns, and mobile telecommunication services.

Numerous studies have sought to detect fraud using innovative approaches. Nigrini and Mittermaier [22] described digital and number tests based on Benford’s law that could be employed by auditors for this purpose. Markou and Singh [23, 24] reviewed anomaly detection using both statistical and neural network-based approaches. Rezaee [7] elaborated on the causes, consequences, and deterrents of financial statement fraud (FSF) incidents, discussed the factors that might increase its likelihood, and emphasized the role of corporate governance in financial fraud prevention. In his work, Rezaee introduced five interactive factors (Cooks, Recipes, Incentives, Monitoring, and End-Results [CRIME]) that can influence fraud occurrence, prevention, and detection in the financial domain. Phua et al. [25] categorized, compared, and summarized the algorithms and performance measurements of almost all the studies published before 2005. Patcha and Park [26] and Hodge and Austin [27] presented comprehensive surveys of anomaly detection systems and hybrid intrusion detection systems and discussed various anomaly-based fraud detection techniques.

Chandola et al. [28] discussed the application of various techniques for detecting anomalies and fraud using supervised approaches. Wu et al. [29] used clustering methods on data with a large class imbalance to produce subclasses within each large cluster with relatively balanced class sizes. They employed signature-based techniques such as SVM for the classification tasks and concluded that such models can generally outperform unassisted auditors in detection rates. Nonetheless, newly invented techniques used by perpetrators in the last decade have remained undetected. Sabau et al. [30] reviewed the techniques used for fraud detection in the last ten years and ranked the most common clustering techniques employed in fraud and anomaly detection.

Zhou and Kapoor [31] examined the effectiveness and limitations of data mining techniques such as regression, decision trees, neural networks, and Bayesian networks, and explored a self-adaptive framework based on a response surface model with domain knowledge to detect financial statement fraud. Sharma and Panigrahi [32] proposed a data mining framework for the detection of financial fraud. Cecchini et al. [33] proposed a methodology to aid in detecting fraudulent financial reporting by utilizing only basic and publicly available financial data. Chen and Roco [34] suggested that an artificial intelligence technique performs quite well in identifying the presence of a fraud lawsuit and hence can be a supportive tool for practitioners.

Ratio analysis is the most conventional approach used by auditors for fraud detection. The problem with this approach, however, is its subjectivity in the selection of the ratios that are likely to indicate fraud [35]. Data mining techniques are designed to uncover implicit, previously unknown, actionable knowledge [20]. To date, the use of these techniques has been limited in FSF detection. The majority of data mining techniques used in this area are supervised techniques such as logistic regression, neural networks, decision trees, and text mining. An even smaller number of studies have employed cluster analysis as a mining technique for categorizing data based on their natural tendencies and similarities. Han et al. [36] presented the critical conversion risks brought into business through IT from the auditors’ point of view. They suggested that firms with higher levels of IT investment are associated with higher audit risks for external auditors and emphasized the importance of enhancing the capabilities of auditors in highly IT-based systems.

Abtahi et al. [37] used a Bayesian classifier model to suggest a system in which fraud in future market-trading coins can be detected. The primary labeling of the data was done by k-means clustering; a similar approach is employed in the present study. Their model could accurately classify 94.55% of the fraudulent cases.

Jan [38] took 160 companies (including 40 fraudulent ones) to evaluate multiple data mining techniques, including ANN and SVM, along with four types of decision trees: classification and regression tree (CART), chi-square automatic interaction detector (CHAID), C5.0, and quick unbiased efficient statistical tree (QUEST). The results of that study show that the ANN + CART model yields the best classification results, with an accuracy of 90.83% in the detection of financial statement fraud.

Yao et al. [39] proposed a hybrid fraudulent financial statement detection model that uses PCA and XGBoost together with SVM, RF, DT, ANN, and LR to construct fraud detection models, and the classification accuracy of each model was compared to determine the optimal one. The study indicated that random forest outperformed the other four methods.

Overall, in the past, expert analysis was used to judge the truthfulness of financial statements. Many doubts have been raised about the efficacy of traditional approaches to detecting fraudulent financial data, as auditors lack the required data mining techniques; that is why traditional expert analysis often fails to identify fraud. Most previous studies use only one or two data mining techniques, without comparing them against each other. As for the input variables, existing studies obtain them from publicly available financial reports. The input variables need to be representative enough to reflect all aspects of a company’s financial status. For instance, the quick ratio and liquidity ratio reflect solvency; the sales growth rate and EPS represent profitability; and operating capacity can be seen from inventory turnover and the turnover of total capital. In addition, some studies investigate linguistic variables from MD&A sections and uncover emotional characteristics of the language in financial reports [40]. Hajek and Henriques [40] found that fraudulent financial reports tend to include a higher rate of negative sentiment.

Some points have remained unaddressed in the literature on the utilization of data mining for the detection of potentially fraudulent financial statement data. Existing algorithms assign cases to only two groups, namely, “fraudulent” and “non-fraudulent,” and this limitation may lead to a significant rate of data misplacement. That is to say, some cases that are truly innocent can fall into the fraudulent group and vice versa. Adding a third group, “suspicious,” resolves this error, as it allows the reexamination of cases that fall somewhere between these two groups using other tools, human intelligence, or a combination of both.

Another drawback is that most researchers have used balanced datasets for building their predictive models. These datasets contain equal numbers of fraudulent and non-fraudulent cases. The problem with this method is that it does not reflect real-world scenarios, since the ratio of fraudulent cases is very small compared to non-fraudulent cases.

In this study, the performance of the algorithm is measured in various situations. That can be a significant help in understanding how algorithms behave and which ones work better than others. Also, various real situations are simulated based on real data, and hypothesis tests are subsequently run to obtain applicable results.

3. Research Methodology and Results

The variables used in this research include 18 financial items (Table 1), which represent the fraud triangle, have been obtained from the study by Ravisankar et al. [8], and reflect liquidity, safety, profitability, and efficiency as indicators of financial status. The dataset used in this research was obtained from 2659 companies listed in China’s stock market by the end of December 2015. To calculate the C11 and C12 variables, relevant data were obtained from the companies for the year 2014.

After normalizing the variables, a generalized k-means cluster analysis was utilized to automatically extract six clusters. These clusters were then labeled with the help of an auditor, and the outcome of the labeling process was three clusters labeled as “fraudulent,” “suspicious,” and “non-fraudulent” (Table 2). The reason for including the “suspicious” cluster is that the algorithm was unable to quickly and accurately place the companies falling into this group into either the fraudulent or the non-fraudulent group. Consequently, to enhance the total performance of the algorithm, all the suspicious cases later undergo further investigation.
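The following sketch illustrates this normalize–cluster–label pipeline with scikit-learn. The file name and the cluster-to-label mapping are placeholders, not the paper’s actual data file or the auditor’s actual assignments.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical input: an (n_companies x 18) matrix of the financial items.
# "financial_items.csv" is an assumed file name, not one shipped with the paper.
X = np.loadtxt("financial_items.csv", delimiter=",")

X_norm = StandardScaler().fit_transform(X)   # normalize the 18 variables
km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X_norm)

# Auditor-provided mapping from the 6 raw clusters to the 3 final labels.
# The assignments below are placeholders, not the paper's actual mapping.
label_map = {0: "non-fraudulent", 1: "non-fraudulent", 2: "suspicious",
             3: "suspicious", 4: "suspicious", 5: "fraudulent"}
labels = np.array([label_map[c] for c in km.labels_])
```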

To design a simulation study matching the real situation, we should note that over 60% of our observations pertained to the “suspicious” cluster (Table 2), which requires reevaluation. As these companies will always undergo further investigation, a fixed proportion of them (60%) is always included in all situations in our design. This study takes sample sizes of 300, 500, and 800, and each of these samples includes various ratios of fraudulent cases, i.e., 1%, 3%, 5%, 7%, and 10%. The aim is to examine whether or not the precision and recall of the predictive models used in this study change in different situations. Five predictive models have been used in this research, namely, MFFNN [41], PNN [42], SVM [43], MLM [44], and DA [45]. The first three are powerful mathematical models suitable for almost all data mining tasks, and the last two are widely used classification methods that fall within the realm of classical statistics. The mathematical formulas are not presented in the body of this research; interested readers can follow the mathematical concepts through the reference list.
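For readers who want to experiment, the sketch below instantiates rough scikit-learn counterparts of four of the five models. The hyperparameters are illustrative defaults, not the configurations tuned in this study, and a PNN has no built-in scikit-learn implementation.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Rough scikit-learn counterparts; hyperparameters are illustrative only.
# A PNN would need a separate implementation (e.g., class-wise Gaussian
# kernel density estimation), so it is omitted here.
models = {
    "MFFNN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "MLM": LogisticRegression(max_iter=1000),  # multinomial with lbfgs
    "DA": LinearDiscriminantAnalysis(),
}
```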

In this work, the clustering process was continued until the reduction in the error function (the average distance of observations from the centers of the clusters in two consecutive iterations) was less than 5%. For instance, if iteration k+1 yields less than a 5% improvement, then we take k as the number of clusters. To measure the distances, the Euclidean distance formula was used:

\[ d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{i=1}^{18} (x_i - y_i)^2}, \]

where \(\mathbf{x}\) and \(\mathbf{y}\) are two points in the 18-dimensional space of the normalized variables.
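A minimal sketch of this stopping rule, assuming the error function is the mean Euclidean distance of observations to their cluster centers:

```python
import numpy as np
from sklearn.cluster import KMeans

def choose_k(X_norm, max_k=15, tol=0.05):
    """Increase k until the mean Euclidean distance of observations to
    their cluster centers improves by less than `tol` (here 5%)."""
    prev_err = None
    for k in range(1, max_k + 1):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_norm)
        # Distance of each observation to its assigned cluster center.
        dists = np.linalg.norm(X_norm - km.cluster_centers_[km.labels_], axis=1)
        err = dists.mean()
        if prev_err is not None and (prev_err - err) / prev_err < tol:
            return k - 1  # the previous k already met the stopping rule
        prev_err = err
    return max_k
```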

To simulate a real-world situation, different sets of parameters were assigned to each of the noted models, and each model was then evaluated 1000 times for each set of parameters. In each iteration, a training sample of the given size (300, 500, or 800) and a test sample are randomly chosen from the total of n = 2659 cases, such that each sample contains a given proportion of fraudulent companies (1%, 3%, 5%, 7%, or 10%) and 60% suspicious companies. It is worth noting that, in each iteration, the training and test samples are independent, so as to avoid overfitting. As each model is evaluated for different sets of parameters in each iteration, the best-fitted model for each iteration is chosen based on the highest precision and then recall values in the test. This process is depicted in Figure 2. Table 3 shows the confusion matrix used for calculating the performance values, i.e., (1) precision and (2) recall, which are formulated as follows [46]:

\[ \text{Precision}_i = \frac{n_{ii}}{\sum_{j} n_{ji}}, \qquad \text{Recall}_i = \frac{n_{ii}}{\sum_{j} n_{ij}}, \]

where \(n_{ij}\) denotes the number of class-\(i\) cases predicted as class \(j\) (Table 3).
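Before turning to the metric definitions, a compact sketch of the sampling step of this protocol follows. The index pools are toy stand-ins for the 2659 companies, and the fit/score step is left as a comment; this is an assumed reading of the design, not the paper’s exact code.

```python
import numpy as np

def draw_sample(rng, idx_fraud, idx_susp, idx_normal, size, fraud_prop):
    """One sample with a fixed fraud proportion and 60% suspicious cases."""
    n_susp = int(0.60 * size)
    n_fraud = max(1, int(fraud_prop * size))
    n_norm = size - n_susp - n_fraud
    return np.concatenate([
        rng.choice(idx_fraud, n_fraud, replace=False),
        rng.choice(idx_susp, n_susp, replace=False),
        rng.choice(idx_normal, n_norm, replace=False),
    ])

# Toy index pools standing in for the 2659 companies.
rng = np.random.default_rng(0)
idx_fraud, idx_susp, idx_normal = (np.arange(0, 100),
                                   np.arange(100, 1700),
                                   np.arange(1700, 2659))

for size in (300, 500, 800):
    for prop in (0.01, 0.03, 0.05, 0.07, 0.10):
        train = draw_sample(rng, idx_fraud, idx_susp, idx_normal, size, prop)
        test = draw_sample(rng, idx_fraud, idx_susp, idx_normal, size, prop)
        # fit each candidate model on `train`, then score precision/recall
        # on `test`; repeated 1000 times per configuration in the paper
```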

Precision can be defined as the proportion of events produced by the models that are correct. Recall is defined as the proportion of events occurring in the domain that are predicted as such by the models. In our case, we have three classes, and the numbers of events correctly assigned to their respective classes are n11 and n33 (Table 3). In other words, since these two classes are of importance, we separate them from the “suspicious” class to calculate their precision and recall values, while our classifier is still able to distinguish them as two distinct classes. An application of the use of precision and recall can be found in [46]. As our dataset is imbalanced (skewed), precision and recall are adequate metrics for our study [47].
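A short sketch of these per-class computations from a 3×3 confusion matrix (rows are actual classes, columns are predicted classes; the counts below are invented for illustration):

```python
import numpy as np

def per_class_precision_recall(cm, cls):
    """Precision and recall for class `cls` from a confusion matrix whose
    rows are actual classes and columns are predicted classes."""
    tp = cm[cls, cls]
    pred = cm[:, cls].sum()    # all cases predicted as `cls`
    actual = cm[cls, :].sum()  # all cases actually in `cls`
    precision = tp / pred if pred else 0.0
    recall = tp / actual if actual else 0.0
    return precision, recall

# Invented counts; classes ordered fraudulent, suspicious, non-fraudulent.
cm = np.array([[12,   3,  1],
               [ 4, 170,  6],
               [ 0,   5, 99]])
print(per_class_precision_recall(cm, 0))  # fraudulent class (n11)
print(per_class_precision_recall(cm, 2))  # non-fraudulent class (n33)
```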

For different sample sizes and proportions of fraudulent companies, Tables 4–6 report the mean, standard deviation, and minimum and maximum values of the precision and recall for each model after 1000 iterations.

According to the findings shown in Tables 4–6, for all combinations of sample sizes (300, 500, and 800) and proportions of fraudulent companies, PNN and MFFNN have the highest precision values, while PNN has the lowest recall value. As the sample size increases, the precision values grow accordingly, and their deviations become smaller. Unlike PNN, however, MFFNN has the best recall value. Meanwhile, the recall values for the PNN approach show a decreasing trend.

According to the findings in Tables 4–6, for all three sample sizes (300, 500, and 800) and for various inclusions of fraudulent cases in the samples, PNN has the highest precision but, taking recall into consideration, the poorest results. Considering the precision values, MFFNN ranks second. MFFNN also outperforms all the models in terms of recall value. We observed that sample size is positively associated with model performance in terms of precision and recall, except for PNN, which displays a negative association between sample size and recall value (it decreases for all proportions except 1%, which shows an unclear trend). Since MFFNN outperforms all the other four models in terms of both precision and recall, we select MFFNN as the reference model. To make our findings generalizable and conclusive, we design the statistical hypothesis testing as follows.

For each approach, first, a one-way analysis of variance (ANOVA) was carried out to compare the mean values of precision and recall of different combinations for a given sample size. In other words, for a given sample size, the aim is to see whether the mean values of the performance measures of each approach differ as the rate of fraudulent companies changes in the sample. In the case of significance, the nature of this relationship will be determined by looking at the mean values of the data to find out whether they follow a strictly monotonic trend. The reports are displayed in Table 7.

The mean values of precision and recall of the five approaches are compared for any given sample size and proportion of fraudulent companies using the one-way repeated measures ANOVA. The real goal is to know whether or not the mean values of performance measures differ among all five approaches. Table 8 reports the results of this analysis.

A one-way ANOVA is conducted for any given proportion of fraudulent companies to see whether or not the performance measures of each approach differ for different sample sizes. Table 9 demonstrates the results of this analysis.
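Each of these tests reduces to a one-way ANOVA over groups of performance values. A minimal sketch with SciPy, using invented placeholder numbers rather than the paper’s iteration-level results:

```python
from scipy import stats

# Placeholder recall values of one model at a fixed fraud proportion,
# one group per sample size (the paper's raw iteration-level results
# are not reproduced here).
recall_n300 = [0.71, 0.69, 0.73, 0.70]
recall_n500 = [0.75, 0.77, 0.74, 0.76]
recall_n800 = [0.80, 0.79, 0.82, 0.81]

f_stat, p_value = stats.f_oneway(recall_n300, recall_n500, recall_n800)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p -> means differ
```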

According to Table 7, of those significant performance measures, only recall values for MFFNN and DA show a monotonic behavior, which is downward for MFFNN and upward for DA. That is to say, as the rate of fraudulent companies increases in the sample, a decrease is expected in the recall value for MFFNN but an increase for DA.

For all the other significant cases reported in Table 7, no monotonic trends were observed as the rate of fraudulent companies increased in the sample. Therefore, it can be concluded that the proportion of fraudulent companies in this sample has no effects on the precision of the five approaches and only has an impact on the recall values of MFFNN and DA, as described.

The results in Table 8 indicate that, for all combinations of sample size and proportion of fraudulent companies, the performance measures of all five approaches differ significantly. As noted earlier, according to Tables 4–6, PNN has the highest values of precision compared to the other methods. Nonetheless, this method has small recall values. MFFNN is also the second best in terms of precision values, which are slightly lower than those of PNN. Nevertheless, this gap decreases as sample size increases. The MFFNN approach gives the best recall values among all five methods. It can therefore be concluded that MFFNN performs better than the other approaches in all situations.

Table 9 presents the results of the mean comparisons in each method for different sample sizes. As shown, except for the recall values of the PNN approach in cases where the rate of fraudulent companies is 1%, all the other performance measures are significant. Based on this finding and by looking at their values in Tables 4–6, it can be inferred that, as sample size increases, the precision value for each approach also increases accordingly. The same interpretation applies to the recall values of all the other approaches, except for the PNN approach. As for PNN, increasing the sample size leads to a reduced recall value. Larger sample sizes are recommended to be examined in order to have more reliable results.

4. Conclusion and Future Research

The financial statement is the mirror of a company’s financial status and reflects useful information about its financial health. Fraudulent financial reporting is a deliberate misstatement of numbers with the aim of deceiving the users of the financial statement. If there is pressure from the board to achieve a certain amount of earnings, and the earned income is linked to management’s bonus, enough motivational stimuli have been created for fraudulent financial reporting. Other motivations for fraudulent financial reporting include promotion to a higher position, salary raises, or a rise in the share value of the company. Financial statement fraud damages corporate reputation and negatively affects the financial market, the nation, and the world economy. The need for intelligent and novel tools that enable auditors to detect fraudulent data is therefore urgent. The present data mining technique could provide a valuable decision support system to managers in the detection of financial statement fraud.

This study used cluster analysis to partition the financial statement data into three groups. The results of the cluster analysis could then be used to conduct statistical experiments in line with the research objectives and to examine the precision and recall values of the five supervised methods known as MFFNN, PNN, SVM, MLM, and DA in the different situations defined in Section 3. The results show that, although the ratio of fraudulent companies in the sample has a significant impact in most cases, this significance does not follow a strictly monotonic form. There are only two methods, MFFNN and DA, in which the recall values correlate with changes in the ratio of fraudulent companies. Moreover, sample size has a significant impact on both the precision and recall values of all the methods used in this study, such that a larger sample size increases the precision values of all the methods. The same is true for the recall values of all the methods, except for PNN, which shows a strictly decreasing trend. The comparison of the approaches in different situations revealed that the MFFNN approach performs best in terms of the performance measures. Although this approach has smaller precision values than PNN, the difference is small and negligible if a larger sample size is selected.

The results of this research allow a variety of groups, especially auditors and investors, to carefully select and decide on the companies they are auditing or investing in. Using these techniques can help auditors save time and cost through automated tools, decreasing their workload considerably. The present research has simulated real scenarios to show how these techniques can be properly used in auditing and fraud detection. Further studies can be carried out by adding more variables, such as enterprise size or the inclusion of IT systems in the enterprise, which seems to be a determining factor. Comparative empirical studies can also be conducted when datasets are obtained from other stock markets. Furthermore, the methodology and procedure used in this paper can be extended to other domains for different purposes, not only fraud detection.

Data Availability

The data (financial statement data) used in this manuscript were taken from the China Securities Market and Accounting Research (CSMAR) database through http://us.gtadata.com, which is also available via WRDS with a proper subscription. The websites to obtain the data are as follows: http://us.gtadata.com and https://wrds-web.wharton.upenn.edu/wrds/index.cfm. Information regarding the variables, sample population, and methodology is given in detail in Section 3 of the manuscript (Research Methodology and Results).

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Sciences Foundation of China under Grants and .