NFAD: fixing anomaly detection using normalizing flows

Anomaly detection is a challenging task that arises in practically all areas of industry and science, from fraud detection and data-quality monitoring to finding rare cases of diseases and searching for new physics. Most conventional approaches to anomaly detection, such as the one-class SVM and the Robust Auto-Encoder, are one-class classification methods, i.e., they focus on separating normal data from the rest of the space. Such methods are based on the assumption that the normal and anomalous classes are separable and, consequently, do not take into account any available samples of anomalies. Nonetheless, in practical settings some anomalous samples are often available, although usually in amounts far lower than required for a balanced classification task, and the separability assumption might not always hold. This leads to an important task: incorporating known anomalous samples into the training procedures of anomaly detection models. In this work, we propose a novel model-agnostic training procedure to address this task. We reformulate one-class classification as a binary classification problem in which normal data are distinguished from pseudo-anomalous samples. The pseudo-anomalous samples are drawn from low-density regions of a normalizing flow model by feeding tails of the latent distribution into the model. Such an approach makes it easy to include known anomalies in the training process of an arbitrary classifier. We demonstrate that our approach shows comparable performance on one-class problems and, most importantly, achieves comparable or superior results on tasks with variable amounts of known anomalies.


INTRODUCTION
The anomaly detection (AD) problem is one of the important tasks in the analysis of real-world data. Possible applications range from data-quality certification (for example, Borisyak et al., 2017) to finding rare specific cases of diseases in medicine (Spence, Parra & Sajda, 2001). The technique can also be used in credit card fraud detection (Aleskerov, Freisleben & Rao, 1997), failure prediction for complex systems (Xu & Li, 2013), and novelty detection in time series data (Schmidt & Simic, 2019).
Formally, AD is a classification problem with a representative set of normal samples and a small, non-representative, or empty set of anomalous examples. Such a setting causes conventional binary classification methods to overfit and to lose robustness w.r.t. novel anomalies (Görnitz et al., 2012). Moreover, anomalies typically cover a much larger ''phase space'' than normal samples; thus, generic models (e.g., a deep neural network with fully connected layers) might require significantly more anomalous examples than normal ones. In contrast, conventional one-class classification (OC-) methods (Breunig et al., 2000; Liu, Ting & Zhou, 2012) are typically robust against all types of outliers. However, OC-methods do not take known anomalies into account, which often results in suboptimal performance when the normal and anomalous classes are not perfectly separable (Campos et al., 2016; Pang, Shen & Van den Hengel, 2019). Research in the area addresses several challenges (Pang et al., 2021): increasing precision, generalizing to unknown anomaly classes, and tackling multi-dimensional data. Several reviews of classical (Zimek, Schubert & Kriegel, 2012; Aggarwal, 2016; Boukerche, Zheng & Alfandi, 2020; Belhadi et al., 2020) and deep-learning methods (Pang et al., 2021) describe the literature in detail. With the advancement of neural generative modeling, methods based on generative adversarial networks (Schlegl et al., 2017), variational autoencoders (Xu et al., 2018), and normalizing flows (Pathak, 2019) have been introduced for the AD task. The source code of this work is available at https://gitlab.com/lambda-hse/nfad.
We propose addressing the class-imbalanced classification task by modifying the learning procedure, effectively making anomaly detection methods suitable for two-class classification. Our approach relies on augmenting the imbalanced dataset with surrogate anomalies sampled from normalizing-flow-based generative models.

PROBLEM STATEMENT
Classical AD methods consider anomalies a priori significantly different from normal samples (Aggarwal, 2016). In practice, while such samples are indeed most likely to be anomalous, some anomalies might not be distinguishable from normal samples (Hunziker et al., 2017; Pol et al., 2019; Borisyak et al., 2017). This provides a strong motivation to include known anomalous samples in the training procedure to improve the performance of the model on these ambiguous samples. Technically, this leads to a binary classification problem, typically solved by minimizing the cross-entropy loss function L_BCE:

L_BCE = −E_{x∼P(x|C+)} [log f(x)] − E_{x∼P(x|C−)} [log(1 − f(x))],   (1)

where f is an arbitrary model (e.g., a neural network), and C+ and C− denote the normal and anomalous classes. In this case, the solution f* approaches the optimal Bayesian classifier:

f*(x) = P(C+|x) = P(x|C+)P(C+) / (P(x|C+)P(C+) + P(x|C−)P(C−)).   (2)

Notice that f* implicitly relies on the estimation of the probability densities P(x|C+) and P(x|C−). A good estimation of these densities is possible only when a sufficiently large and representative sample is available for each class. In practical settings, this assumption certainly holds for the normal class. However, the anomalous dataset is rarely large or representative, often consisting of only a few samples or covering only a portion of all possible anomaly types. With only a small number of examples (or a non-representative sample) to estimate the second term of Eq. (2), L_BCE effectively does not depend on f(x) for x ∈ supp C− \ supp C+, which leads to solutions with arbitrary predictions in that area, i.e., to classifiers that are not robust to novel anomalies.
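As a concrete numerical illustration of Eq. (2), the Bayes-optimal classifier can be computed directly when both class densities are known. The 1-D Gaussian densities below are a hypothetical choice of ours for illustration only, not from the paper:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian N(mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def bayes_posterior(x, prior_pos=0.5):
    """f*(x) = P(C+|x) from Eq. (2), with illustrative class densities:
    normal class ~ N(0, 1), anomalous class ~ N(3, 1)."""
    p_pos = gauss_pdf(x, 0.0, 1.0) * prior_pos         # P(x|C+) P(C+)
    p_neg = gauss_pdf(x, 3.0, 1.0) * (1.0 - prior_pos)  # P(x|C-) P(C-)
    return p_pos / (p_pos + p_neg)
```

With equal priors and equal variances, the posterior equals 0.5 exactly at the midpoint between the two means; away from the anomalous mode it approaches 1. The point of Eq. (2) is that this computation silently requires a good estimate of P(x|C−), which a few known anomalies cannot provide.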
One-class classifiers avoid this problem by aiming to explicitly separate the normal class from the rest of the space (Liu, Ting & Zhou, 2008;Scholkopf & Smola, 2018). As discussed above, this approach, however, ignores available anomalous samples, potentially leading to incorrect predictions on ambiguous samples.
Recently, semi-supervised AD algorithms such as the 1+ε-classification method (Borisyak et al., 2020), the Deep Semi-supervised AD method (Ruff et al., 2019), Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection (Zhou et al., 2021) and Deep Weakly-supervised Anomaly Detection were put forward. They aim to combine the main properties of both unsupervised (one-class) and supervised (binary classification) approaches: the proper posterior probability estimates of binary classification and the robustness to novel anomalies of one-class classification.
In this work, we propose a method that extends the 1+ε-classification scheme (Borisyak et al., 2020) by exploiting normalizing flows. The method is based on sampling surrogate anomalies to augment the existing anomaly dataset.

NORMALIZING FLOWS
The normalizing flows (Rezende & Mohamed, 2015b) generative model aims to fit the exact probability distribution of the data. It represents a set of invertible transformations {f_i(·; θ_i)} with parameters θ_i that forms a bijection between the given distribution of training samples and some domain distribution with a known probability density function (PDF). However, for a non-trivial bijection z_0 ↔ z_k, the distribution density at the final point z_k (training sample) differs from the density at the point z_0 (domain). This is because each non-trivial transformation f_i(·; θ_i) changes the infinitesimal volume at some points. Thus, the task is not only to find a flow of invertible transformations {f_i(·; θ_i)}, but also to track how the distribution density changes at each point after each transformation f_i(·; θ_i).
Consider the multivariate transformation of variable z_i = f_i(z_{i−1}; θ_i) with parameters θ_i for i > 0. The Jacobian of the transformation f_i(z_{i−1}; θ_i) at the point z_{i−1} has the form

J(f_i | z_{i−1}) = ∂f_i(z_{i−1}; θ_i) / ∂z_{i−1}.

Then the distribution density at the point z_i after the transformation f_i of the point z_{i−1} can be written in the usual change-of-variables form:

p(z_i) = p(z_{i−1}) · |det J(f_i | z_{i−1})|^{−1},

where det J(f_i | z_{i−1}) is the determinant of the Jacobian matrix J(f_i | z_{i−1}) (Rezende & Mohamed, 2015).

Thus, given a flow of invertible transformations

x = z_N = f_N(… f_2(f_1(z_0; θ_1); θ_2) …; θ_N)

and a domain distribution of z_0 with known PDF p(z_0), we obtain the likelihood p(x) for each object x = z_N. This way, the parameters {θ_i}_{i=1}^N of the NF model f can be fitted by explicitly maximizing the likelihood p(x) over the training objects x ∈ X. In practice, a Monte-Carlo estimate of log p(X) = log ∏_{x∈X} p(x) = Σ_{x∈X} log p(x) is optimized, which is an equivalent optimization procedure. The likelihood p(X) can also be used as a metric of how well the NF model f fits the given data X.
The main computational bottleneck of this scheme is the computation of det J(·|·), which is O(n^3) in the general case (n is the dimension of the variable z). To deal with this problem, normalizing flows with specific families of transformations f are used, for which the Jacobian computation is much faster (Rezende & Mohamed, 2015; Papamakarios, Pavlakou & Murray, 2017; Kingma et al., 2016; Chen et al., 2019).
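The change-of-variables computation above can be sketched for the simplest possible flow: a single elementwise affine transformation, whose Jacobian is diagonal, so log|det J| = Σ log|a|. Function names are ours; the actual flows used in this work (NSF, IAF, ResFlow) replace the affine map with richer invertible layers:

```python
import numpy as np

def affine_forward(z, a, b):
    """One elementwise affine flow step: x = a * z + b."""
    return a * z + b

def affine_inverse(x, a, b):
    """Inverse transformation: z = (x - b) / a."""
    return (x - b) / a

def log_prob(x, a, b):
    """log p(x) = log p(z0) - log|det J|, with a standard normal base p(z0)
    and a diagonal Jacobian, so log|det J| = sum(log|a|)."""
    z0 = affine_inverse(x, a, b)
    base_logp = -0.5 * np.sum(z0 ** 2) - 0.5 * z0.size * np.log(2 * np.pi)
    return base_logp - np.sum(np.log(np.abs(a)))

a = np.array([2.0, 0.5])
b = np.array([1.0, -1.0])
x = affine_forward(np.zeros(2), a, b)  # image of the base mode, so x == b
lp = log_prob(x, a, b)                 # here sum(log|a|) = log 2 + log 0.5 = 0
```

Fitting the flow amounts to maximizing Σ log_prob(x) over the training set; for this toy map log|det J| is a constant, whereas deep flows accumulate one such term per layer.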

ALGORITHM
The suggested NF-based AD method (NFAD) is a two-step procedure. In the first step, we train a normalizing flow on normal samples so that it can be used to sample new surrogate anomalies. Here, we assume that anomalies differ from normal samples, so their likelihood p_NF(x−|C+) is lower than the likelihood of normal samples p_NF(x+|C+). In the second step, we sample new surrogate anomalies from the tails of the normal samples' distribution using the NF and train an arbitrary binary classifier on normal samples and a mixture of real and sampled surrogate anomalies.

Step 1. Training normalizing flow
We train a normalizing flow on normal samples using the standard maximum-likelihood scheme for normalizing flows (see 'Normalizing flows'): θ* = argmax_θ Σ_{x∈C+} log p(x; θ). Once the NF is trained, it can be used to sample new anomalies: we sample z from the tails of the normal domain distribution, where the p-value of the tails is a hyperparameter (see Fig. 1).
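A minimal sketch of this tail-sampling step, assuming a standard normal domain N(0, I) and defining the tail by the radial (1 − p) quantile; the helper name and the rejection-sampling strategy are our illustration, not necessarily the paper's implementation:

```python
import numpy as np

def sample_tail(n, dim, p_tail, rng):
    """Rejection-sample z ~ N(0, I) conditioned on ||z|| exceeding the
    empirical (1 - p_tail) radial quantile, i.e. the p_tail-tail of the domain."""
    # estimate the radial quantile with a Monte-Carlo reference sample
    ref = np.linalg.norm(rng.standard_normal((100_000, dim)), axis=1)
    threshold = np.quantile(ref, 1.0 - p_tail)
    accepted = []
    while len(accepted) < n:
        z = rng.standard_normal((4 * n, dim))
        r = np.linalg.norm(z, axis=1)
        accepted.extend(z[r > threshold])
    return np.asarray(accepted[:n])

rng = np.random.default_rng(0)
tail_z = sample_tail(500, 2, p_tail=0.05, rng=rng)
# surrogate anomalies would then be obtained by pushing tail_z
# through the trained flow: x_tilde = f(tail_z; theta)
```

The p-value p_tail plays the role of the hyperparameter p from the text: smaller values push the latent samples, and hence the surrogate anomalies, further from the bulk of the normal class.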
Here, we assume that test-time anomalies are either represented in the given anomalous training set or are novelties w.r.t. the normal class. In other words, p(x|C+) of novelties x must be relatively small. Nevertheless, the likelihood p(x) obtained by the NF might differ drastically from the likelihood p(z) of the corresponding domain point because of the non-unit Jacobian of the NF transformations (Eq. (8)). Such distribution density distortion is illustrated in Fig. 2 and makes the proposed anomaly sampling scheme incomplete: because of the distortion, some points in the tails of the domain can correspond to normal samples, and some points in the body of the domain distribution can correspond to anomalies. To fix this, we propose a Jacobian regularization of normalizing flows (Fig. 2) by introducing an extra regularization term that penalizes the model for a non-unit Jacobian:

L(θ) = L_NLL(θ) + λ L_J(θ),   (9)

where λ denotes the regularization hyperparameter. We estimate the regularization term L_J in Eq. (9) by directly sampling z from the domain distribution N(0, I) to cover the whole sampling space. The theorem below proves that any level of expected distortion can be obtained with such regularization. The proof relies on the existence of θ_0 such that f(·; θ_0) = I, for which p_f(f(z; θ_0)) = p(z) for all z, so the regularized objective attains a minimum c_min < c_0 (the minimum exists since the negative log-likelihood term is lower bounded); the bound then holds for any λ. In this work, we use Neural Spline Flows (NSF, Durkan et al., 2019) and Inverse Autoregressive Flows (IAF, Kingma et al., 2016) for sampling tabular anomalies, and Residual Flows (ResFlow, Chen et al., 2019) for sampling anomalies on image datasets. All these flows satisfy the conditions of Theorem 4.1. The corresponding algorithms are called 'nfad-nsf', 'nfad-iaf' and 'nfad-resflow', respectively.
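The Jacobian penalty can be illustrated on the same elementwise affine flow, where log|det J| is constant in z. The squared expected log-determinant used below is one plausible instantiation of L_J assumed by us for the sketch; the paper's exact Eq. (9) form may differ:

```python
import numpy as np

def jacobian_penalty(log_scale, z_batch):
    """Monte-Carlo estimate of E_{z~N(0,I)} (log|det J(f|z)|)^2 for an
    elementwise affine flow f(z) = s * z + t, where log|det J| = sum(log|s|)
    is the same at every z. Deep flows would evaluate log|det J| per sample."""
    log_det = np.full(len(z_batch), np.sum(log_scale))
    return float(np.mean(log_det ** 2))

rng = np.random.default_rng(0)
z = rng.standard_normal((256, 3))  # direct samples from the domain N(0, I)
penalty_identity = jacobian_penalty(np.zeros(3), z)            # identity flow: zero penalty
penalty_scaled = jacobian_penalty(np.log([2.0, 2.0, 0.5]), z)  # non-unit Jacobian is penalized
```

In training, this term would be scaled by λ and added to the negative log-likelihood, pulling the flow toward volume-preserving transformations so that latent tails map to data-space low-density regions.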
Step 2. Training classifier
Once the normalizing flow for anomaly sampling is trained, a classifier can be trained on normal samples and a mixture of real and surrogate anomalies sampled from the NF (Fig. 3).
During the research, we used the binary cross-entropy objective, Eq. (1), to train the classifier. We do not focus on the classifier configuration, since any classification model can be used at this step.
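A minimal sketch of how a classifier minibatch could be assembled in this step; the helper name and the label convention (normal = 0, anomaly = 1) are ours, and the arrays stand in for data that would come from the dataset and the trained NF:

```python
import numpy as np

def make_batch(x_normal, x_anom, x_surrogate, rng):
    """Mix normal samples with real anomalies (if any) and NF-sampled
    surrogates; return shuffled features and 0/1 labels."""
    negatives = np.concatenate([x_anom, x_surrogate]) if len(x_anom) else x_surrogate
    x = np.concatenate([x_normal, negatives])
    y = np.concatenate([np.zeros(len(x_normal)), np.ones(len(negatives))])
    idx = rng.permutation(len(x))
    return x[idx], y[idx]

rng = np.random.default_rng(0)
x_batch, y_batch = make_batch(
    rng.standard_normal((64, 5)),       # normal minibatch
    rng.standard_normal((4, 5)) + 3.0,  # few real anomalies (may be empty)
    rng.standard_normal((28, 5)) + 3.0, # NF-sampled surrogate anomalies
    rng,
)
```

Because the negative class is always populated by surrogates, the classifier sees a two-class problem even when C− is empty, which is what makes the scheme applicable to any off-the-shelf classifier.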

Final algorithm
The final scheme of the algorithm is shown in Fig. 3, accompanied by the pseudocode in Algorithm 1. All training details are given in Appendix A.
Input: Normal samples C+, anomaly samples C− (may be empty), p-value of the tail p, number of epochs for the NF E_NF, number of epochs for the classifier E_CLF
Output: Anomaly classifier g_φ
for epoch from 1 to E_NF do
    sample a minibatch of normal samples X+ ∼ C+;
    compute the NF bijection between Gaussian points Z+ and normal samples X+: Z+ = f^{−1}(X+; θ);
    update the parameters θ of the NF f by gradient ascent on the log-likelihood;
end
for epoch from 1 to E_CLF do
    sample Z from the p-tail of the domain distribution N(0, I);
    sample surrogate anomalies X̃ using the NF: X̃ = f(Z; θ);
    sample a minibatch of normal samples: X+ ∼ C+;
    sample a minibatch of anomalies (if C− is not empty): X− ∼ C−;
    update the parameters φ of the classifier g_φ by gradient descent on the classification loss;
end
Algorithm 1: NFAD algorithm
As the proposed method targets problems that are intermediate between one-class and two-class problems, we compare the proposed approach with the following algorithms:
• one-class methods: Robust AutoEncoder (RAE-OC, Chalapathy, Krishna Menon & Chawla, 2017) and Deep SVDD (Ruff et al., 2018);
• conventional two-class classification;
• semi-supervised methods: dimensionality reduction by a Deep AutoEncoder followed by two-class classification (DAE), Feature Encoding with AutoEncoders for Weakly-supervised Anomaly Detection (FEAWAD, Zhou et al., 2021), and DevNet (Pang, Shen & Van den Hengel, 2019).
We compare the algorithms using the ROC AUC metric to avoid unnecessary optimization for threshold-dependent metrics like accuracy, precision, or F1. Tables 1, 2 and 3 show the experimental results on tabular data; Tables 4, 5 and 6 show the experimental results on image data. Some of the aforementioned algorithms, like DevNet, are applicable only to tabular data and are not reported on image data. In these tables, columns represent tasks with a varying number of negative samples in the training set: numbers in the header indicate either the number of classes that form the negative class (for the KDD, CIFAR, OMNIGLOT and MNIST datasets) or the number of negative samples used (HIGGS and SUSY); 'one-class' denotes the absence of known anomalous samples. As one-class algorithms do not take negative samples into account, their results are identical for tasks with any number of known anomalies. The best score in each column is highlighted in bold font.

DISCUSSION
Our tests suggest that the best results are achieved when the normal class distribution has a single mode and convex borders. These requirements are data-specific and cannot be effectively addressed within our algorithm. The effects can be seen in Fig. 2, where two modes result in a ''bridge'' in the reconstructed normal class shape, and the non-convexity of the borders results in a worse description of the separation boundary. Also, hyperparameters like the Jacobian regularization λ and the tail size p must be chosen carefully. This is illustrated in Figs. 1 and 2, where we show the varying sample quality and the performance of our algorithm for different hyperparameter values. To find suitable values, some heuristics can be used. For instance, the optimal tail location p can be estimated based on known anomalies from the training dataset, whereas the Jacobian regularization λ in the NF training process can be linearly scheduled like the KL factor in Hasan et al. (2020).
Our experiments suggest that the main reason for the proposed method's lower performance relative to others on image data is the tendency of normalizing flows to estimate the likelihood of images from local features instead of common semantics, as described by Kirichenko, Izmailov & Wilson (2020). We also find that overfitting of the classifier must be carefully monitored and addressed, as it might degrade the algorithm's performance.
However, the results obtained on the HIGGS, KDD, SUSY and CIFAR-10 datasets demonstrate the strong potential of the proposed method over previous AD algorithms. With the advancement of new ways of applying NFs to images, the results are expected to improve for this class of datasets as well. In particular, we believe our method to be widely applicable in industrial environments, where the AD task can take advantage of both tabular and image-like datasets. It should also be emphasized that, unlike state-of-the-art AD algorithms (Zhou et al., 2021; Ruff et al., 2019), we propose a model-agnostic data augmentation algorithm that does not modify the AD model's training scheme or architecture. It enriches the input set of training anomalies, requiring only normal samples in the augmentation process (Fig. 3).

CONCLUSION
In this work, we present a new model-agnostic anomaly detection training scheme that efficiently deals with problems that are hard to address by either one-class or two-class methods. The solution combines the best features of both approaches. In contrast to one-class approaches, the proposed method makes the classifier effectively utilize any number of known anomalous examples, but, unlike conventional two-class classification, it does not require an extensive number of anomalous samples. The proposed algorithm significantly outperforms existing anomaly detection algorithms in most realistic anomaly detection cases. This approach is especially beneficial for anomaly detection problems in which the anomalous data is non-representative or might drift over time.
The proposed method is fast, stable and flexible in both the training and inference stages; unlike previous methods, any classifier can be used in the scheme with any number of anomalies in the training dataset. Such a universal augmentation scheme opens wide prospects for further anomaly detection research and makes it possible to use any classifier on any kind of data. Also, the results on image datasets should improve as new normalizing flow techniques become available.

APPENDIX A. TRAIN AND IMPLEMENTATION DETAILS
All the code is implemented using the PyTorch (Paszke et al., 2019) framework. For augmentation, ResFlow (Chen et al., 2019), NSF (Durkan et al., 2019) and IAF (Kingma et al., 2016) are trained with default parameters. As a classifier, a three-layer dense network is used for tabular data (see Fig. 4), and the built-in ResFlow classification head is used for images. The tabular data classifier is trained for 10 epochs with batch size 100 using the AdamW (Loshchilov & Hutter, 2017) optimizer with default PyTorch parameters. For image data, the ResFlow classification head is trained for 8 epochs with batch size 40 using the Adam (Kingma & Ba, 2014) optimizer with default PyTorch parameters.

ADDITIONAL INFORMATION AND DECLARATIONS Funding
The research leading to these results has received funding from Russian Science Foundation under grant agreement no. 19-71-30020. The research was also supported through