X-Detect: Explainable Adversarial Patch Detection for Object Detectors in Retail

Object detection models, which are widely used in various domains (such as retail), have been shown to be vulnerable to adversarial attacks. Existing methods for detecting adversarial attacks on object detectors have had difficulty detecting new real-life attacks. We present X-Detect, a novel adversarial patch detector that can: i) detect adversarial samples in real time, allowing the defender to take preventive action; ii) provide explanations for the alerts raised to support the defender's decision-making process; and iii) handle unfamiliar threats in the form of new attacks. Given a new scene, X-Detect uses an ensemble of explainable-by-design detectors that utilize object extraction, scene manipulation, and feature transformation techniques to determine whether an alert needs to be raised. X-Detect was evaluated in both the physical and digital space using five different attack scenarios (including adaptive attacks), the COCO dataset, and our new Superstore dataset. The physical evaluation was performed using a smart shopping cart setup in real-world settings and included 17 adversarial patch attacks recorded in 1,700 adversarial videos. The results showed that X-Detect outperforms state-of-the-art methods in distinguishing between benign and adversarial scenes in all attack scenarios while maintaining a 0% FPR (no false alarms) and providing actionable explanations for the alerts raised. A demo is available.


Introduction
Object detection (OD) models are commonly used in computer vision in various industries, including manufacturing [44], autonomous driving [25], security surveillance [24], and retail [35,17,11,4]. Since existing automated retail checkout solutions often necessitate extensive changes to a store's facilities (as in Amazon's Just Walk Out [19]), a simpler solution based on a removable plugin placed on the shopping cart and an OD model was proposed (both in the literature [37,41] and by industry). Since adversarial attacks can compromise an OD model's integrity [30,45,53,21], such smart shopping carts are also at risk. For example, an adversarial customer could place an adversarial patch on a high-cost product (such as an expensive bottle of wine), causing the OD model to misclassify it as a cheaper product (such as a carton of milk). Given the rise in retail theft [14,16], such attacks would result in a loss of revenue for a retail chain and, if not addressed, compromise the solution's trustworthiness. The detection of such physical patch attacks in the retail setting is a challenging task since: i) the defense mechanism (adversarial detector) should raise an alert in an actionable time frame to prevent the theft from taking place; ii) the adversarial detector should provide explanations when an adversarial alert is raised to prevent a retail chain from falsely accusing a customer of being a thief [2]; and iii) the adversarial detector should be capable of detecting unfamiliar threats (i.e., adversarial patches of different shapes, colors, and textures) [51]. Several methods for detecting adversarial patches for OD models have been proposed [12,22,29]; however, none of them provides an adequate solution that addresses all of the above requirements.
We present X-Detect, a novel adversarial detector for OD models which is suitable for real-life settings. Given a new scene, X-Detect identifies whether the scene contains an adversarial patch, i.e., whether an alert needs to be raised (an illustration of its use is presented in Figure 1). X-Detect consists of two base-detectors: the object extraction detector (OED) and the scene processing detector (SPD), each of which alters the attacker's assumed attack environment in order to detect the presence of an adversarial patch. The OED changes the attacker's assumed machine learning (ML) task, i.e., object detection, to image classification by utilizing an object extraction model and a customized k-nearest neighbors (KNN) classifier. The SPD changes the attacker's assumed OD pipeline by adding a scene preprocessing step to limit the effect of the adversarial patch. The two base detectors can be used as an ensemble or on their own.
We empirically evaluated X-Detect in both the digital and physical space using five different attack scenarios (including adaptive attacks) that varied in terms of the attacker's level of knowledge. The evaluation was performed using a variety of OD algorithms, including Faster R-CNN [39], YOLO [38], Cascade R-CNN [5], and Grid R-CNN. In the digital evaluation, we digitally placed adversarial patches on objects in the Common Objects in Context (COCO) dataset, a benchmark dataset in the OD domain. In the physical evaluation, we physically placed 17 adversarial patches on objects (products found in retail stores) and created more than 1,700 adversarial videos that were recorded in a real smart shopping cart setup. For this evaluation, we created the Superstore dataset, an OD dataset tailored to the retail domain. Our evaluation results show that X-Detect can successfully identify digital and physical adversarial patches, outperform state-of-the-art methods (Segment & Complete and Ad-YOLO) without interfering with the detection in benign scenes (scenes without an adversarial patch), and provide explanations for its output, all without being exposed to adversarial examples. The main contributions of this paper are as follows:
• To the best of our knowledge, X-Detect is the first adversarial detector capable of providing explainable adversarial detection for OD models; moreover, X-Detect can be employed in any user-oriented domain where explanations are needed, and specifically in retail.
• X-Detect is a model-agnostic solution. By requiring only black-box access to the target model, it can be used for adversarial detection for any OD algorithm.
• X-Detect supports the addition of new classes without any additional training, which is essential in retail where new items are added to the inventory on a daily basis.
• The resources created in this research can be used by the research community to further investigate adversarial attacks in the retail domain, i.e., the Superstore dataset and the corresponding adversarial videos will be publicly available when the paper is published.

Background
Adversarial samples are real data samples that have been perturbed by an attacker to influence an ML model's prediction [10,8]. Numerous digital and physical adversarial attacks have been proposed [18,34,7,3,21], and recent studies have shown that such attacks, in the form of adversarial patches, can also target OD models [45,53,21,42]. Since OD models are used for real-world tasks, such patches can deceive the model even in environments with high uncertainty [27,53,21]. An adversary that crafts an adversarial patch against an OD model may have one of three goals: i) to prevent the OD model from detecting the presence of an object in a scene, i.e., perform a disappearance (hidden) attack [43,45,21,53]; ii) to allow the OD model to successfully identify the object in a scene (correct bounding box) but cause it to be classified as a different object, i.e., perform a creation attack [52]; or iii) to cause the OD model to detect a non-existent object in a scene, i.e., perform an illusion attack [30,27]. Examples of adversarial patch attacks that target OD models (and were used in X-Detect's evaluation) include DPatch [30] and the illusion attack of Lee & Kolter [27], both of which craft a targeted adversarial patch with minimal changes to the bounding box.
To successfully detect adversarial patches, X-Detect utilizes four computer vision techniques: 1) Object extraction [26] - the task of detecting and delineating the objects in a given scene, i.e., 'cropping' the object presented in the scene and erasing its background; 2) Arbitrary style transfer [23] - an image manipulation technique that extracts the style "characteristics" from a given style image and blends them into a given input image; 3) Scale-invariant feature transform (SIFT) [31] - an explainable image matching technique that extracts a set of key points that represent the "essence" of an image, allowing the comparison of different images; the key points are selected by examining their surrounding pixels' gradients after applying varied levels of blurring; and 4) Class prototypes [40] - an explainability technique that identifies the data samples that best represent a class, i.e., the samples most related to a given class [36].
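As an illustration of the SIFT-style matching that X-Detect relies on, the sketch below applies Lowe's ratio test to toy 2-D descriptors (real SIFT descriptors are 128-dimensional and are typically matched with a library such as OpenCV); a match is kept only when the nearest descriptor is clearly closer than the second nearest:

```python
from math import dist

def match_keypoints(desc_a, desc_b, ratio=0.75):
    """Count 'good' matches between two descriptor sets using Lowe's
    ratio test: a candidate match is kept only if the best match is
    clearly closer than the second-best one."""
    good = 0
    for d in desc_a:
        dists = sorted(dist(d, e) for e in desc_b)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            good += 1
    return good

# Toy 2-D "descriptors": image A shares two distinctive features with
# image B; the third descriptor is ambiguous and is rejected.
img_a = [(0.0, 0.0), (5.0, 5.0), (2.5, 2.5)]
img_b = [(0.1, 0.0), (5.0, 5.1), (20.0, 20.0)]
print(match_keypoints(img_a, img_b))  # 2
```

The match count is exactly the kind of proximity score the prototype-KNN described later builds on.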

Related Work
While adversarial detection in image classification has been extensively researched [49,1,13,15,50], only a few studies have been performed in the OD field. Moreover, it has been shown that adversarial patch attacks and detection techniques suited for image classification cannot be successfully transferred to the OD domain [30,32,33]. Adversarial detection for OD can be divided into two categories: patch detection based on adversarial training (AT) and patch detection based on inconsistency comparison (IC). The former is performed by adding adversarial samples to the OD model's training process so that the model becomes familiarized with the "adversarial" class, which improves the detection rate [12,29,48]. AT detectors can be applied either externally (the detector operates separately from the OD model) or internally (the detector is incorporated into the architecture of the OD model). One example of an externally applied AT detector is Segment & Complete (SAC) [29]; SAC detects adversarial patches by training a separate segmentation model, which is used to detect and erase the adversarial patches from the scene. In contrast, Ad-YOLO [22] is an internal AT detector, which detects adversarial patches by adding them to the model's training set as an additional "adversarial" class. The main limitation of AT detectors is that the detector's effectiveness is correlated with the attributes of the patches present in the training set [51], i.e., the detector will have a lower detection rate for new patches. In addition, none of the existing detectors provide a sufficient explanation for the alerts raised.
The IC detection approach examines the similarity between the target model's prediction and the prediction of another ML model, referred to as the predictor. Any inconsistency between the two models' predictions triggers an adversarial alert [47]. DetectorGuard [47] is an example of a method employing IC; in this case, an instance segmentation model serves as the predictor. DetectorGuard assumes that a patch cannot mislead an OD model and an instance segmentation model simultaneously. However, by relying on the output differences of those models, DetectorGuard is only effective against disappearance attacks and is not suitable for creation or illusion attacks that do not significantly alter the object's shape. Given this limitation, DetectorGuard is not a valid solution for the detection of such attacks, which are the attacks most likely to be used against smart shopping cart systems.

The Method
X-Detect's design is based on the assumption that the attacker crafts the adversarial patch for a specific attack environment (the target task is OD, and the input is preprocessed in a specific way), i.e., any change in the attack environment will harm the patch's capabilities. X-Detect starts by locating and classifying the main object in a given scene (which is the object most likely to be attacked) by using two explainable-by-design base detectors that change the attack environment. If there is a disagreement between the classification of X-Detect and the target model, X-Detect will raise an adversarial alert. In this section, we introduce X-Detect's components and structure (as illustrated in Figure 2). X-Detect consists of two base detectors, the object extraction detector (OED) and the scene processing detector (SPD), each of which utilizes different scene manipulation techniques to neutralize the adversarial patch's effect on an object's classification. These components can be used separately (by comparing the selected base detector's classification to the target model's classification) or as an ensemble to benefit from the advantages of both base detectors (by aggregating the outputs of both base detectors and comparing the result to the target model's classification).
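X-Detect's top-level decision rule, raising an alert when the altered-environment classification disagrees with the target model, can be sketched as follows (the detectors, aggregation function, and class names below are toy stand-ins for illustration, not the paper's implementation):

```python
def x_detect_alert(scene, target_model_class, base_detectors, aggregate):
    """Raise an adversarial alert when the base detectors, which alter
    the attacker's assumed attack environment, disagree with the target
    OD model's classification of the main object."""
    votes = [detector(scene) for detector in base_detectors]
    return aggregate(votes) != target_model_class

# Toy stand-ins: the patched scene fools the target model into seeing
# "milk", but both base detectors still recover "wine" -> alert.
oed = lambda scene: "wine"
spd = lambda scene: "wine"
majority = lambda votes: max(set(votes), key=votes.count)
print(x_detect_alert("patched scene", "milk", [oed, spd], majority))  # True
```

Note that this rule needs only the target model's predicted class, which is why X-Detect is model-agnostic.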
The following notation is used: let F be an OD model and s be an input scene.

Object Extraction Detector
The OED receives an input scene s and outputs its classification for the main object in s. First, the OED uses an object extraction model to eliminate the background noise around the main object in s. As opposed to OD models, object extraction models use segmentation techniques that focus on the object's shape rather than on other properties. Patch attacks on OD models change the object's classification without changing the object's outlines [29]; therefore, the patch will not affect the object extraction model's output. Additionally, by using object extraction, the OED changes the assumed object surroundings by eliminating the scene's background, which may affect the final classification. Then, the output of the object extraction model is classified by the prototype-KNN classifier, a customized KNN model. KNN is an explainable-by-design algorithm which, for a given sample, returns the k closest samples according to a predefined proximity metric and classifies the sample by majority voting. Specifically, the prototype-KNN chooses the k closest neighbors from a predefined set P of prototype samples from every class. By changing the ML task from detection to classification, the assumed attack environment is changed. In addition, using prototypes as the neighbors guarantees that the set of neighbors properly represents the different classes. The prototype-KNN proximity metric is based on the number of identical visual features shared by the two objects examined. The visual features are extracted using SIFT [31], and the object's unique characteristics are represented by a set of key points. The class of the prototype with the highest number of key points matching the examined object is selected. The OED's functionality is presented in equation 1, where PKNN is the prototype-KNN, p_i ∈ P is a prototype sample, and OE is the object extraction model.
The OED is considered explainable-by-design: i) SIFT produces an explainable output that visually connects the matching key points in the two scenes; and ii) the k neighbors used for the classification explain the decision of the prototype-KNN.
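A minimal sketch of the prototype-KNN's voting logic; the set-intersection `match` function below is a toy stand-in for the SIFT key-point matching described above, and k = 7 follows the setting reported in the experimental settings:

```python
from collections import Counter

def prototype_knn(obj_desc, prototypes, match_fn, k=7):
    """Classify an extracted object by its k most similar class
    prototypes, where similarity is the number of matching visual
    features (SIFT-style key points) shared with each prototype."""
    ranked = sorted(prototypes,
                    key=lambda p: match_fn(obj_desc, p["desc"]),
                    reverse=True)
    top_k = [p["label"] for p in ranked[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy feature sets standing in for SIFT key points.
match = lambda a, b: len(a & b)
protos = [{"label": "wine", "desc": {1, 2, 3}},
          {"label": "wine", "desc": {2, 3, 4}},
          {"label": "milk", "desc": {8, 9}}]
obj = {1, 2, 3, 4}
print(prototype_knn(obj, protos, match, k=3))  # wine
```

The ranked prototypes that drive the vote are exactly the artifacts that can be shown to a human as an explanation.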

Scene Processing Detector
Like the OED, the SPD receives an input scene s and outputs its classification for the main object in s. First, the SPD applies multiple image processing techniques to s; it then feeds the processed scenes to the target model and aggregates the results into a classification. The image processing techniques are applied to change the assumed OD pipeline by adding a preprocessing step, which limits the patch's effect on the target model [45,21]. The effect of the applied image processing techniques on the target model itself must be considered, i.e., a technique that harms the target model's performance on benign scenes should not be used.

[Figure 3: (a) the physical use case setup; (b) the attacker's knowledge (threat model, defender settings, model weights) in each of the attack scenarios examined.]
where m ∈ SM represents an image processing technique, and Arg aggregates the main object's classification probability from the output for each processed scene. The SPD is considered explainable-by-design since it provides explanations for its alerts: every alert raised is accompanied by the processed scenes, which can be viewed as explanations-by-example, i.e., samples that explain why X-Detect's prediction changed.
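A minimal sketch of the SPD's aggregation step, assuming the target model exposes per-class probabilities; the toy model and processing functions are illustrative only:

```python
def scene_processing_detector(scene, target_model, processings):
    """Run the target model on several processed versions of the scene
    and aggregate the per-class probabilities; a patch tuned to the
    unmodified pipeline tends to lose its effect after preprocessing."""
    totals = {}
    for process in processings:
        probs = target_model(process(scene))  # {class: probability}
        for cls, p in probs.items():
            totals[cls] = totals.get(cls, 0.0) + p
    return max(totals, key=totals.get)

def toy_model(scene):
    # Illustrative assumption: the patch only fools the model on the
    # raw, unprocessed scene.
    if scene == "raw":
        return {"milk": 0.9, "wine": 0.1}
    return {"milk": 0.2, "wine": 0.8}

blur = lambda scene: "blurred"
sharpen = lambda scene: "sharpened"
print(scene_processing_detector("raw", toy_model, [blur, sharpen]))  # wine
```

Each processed scene that flips the prediction doubles as an explanation-by-example for the alert.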

Datasets
The following two datasets were used in the evaluation: Common Objects in Context (COCO) 2017 [28] - an OD benchmark containing 80 object classes and over 120K labeled images.
Superstore - a dataset that we created, which is customized for the retail domain and the smart shopping cart use case. The Superstore dataset contains 2,200 images (1,600 for training and 600 for testing), evenly distributed across 20 superstore products (classes). Each image is annotated with a bounding box, the product's classification, and additional visual annotations (more information can be found in the supplementary material). The Superstore dataset's main advantages are that all of its images were captured by cameras in a real smart cart setup (described in Section 5.2) and that it is highly diverse, i.e., the dataset can serve as a high-quality training set for related tasks.

Evaluation Space
X-Detect was evaluated in two attack spaces: digital and physical. In the digital space, X-Detect was evaluated under a digital attack with the COCO dataset. In this use case, we used open-source pretrained OD models from the MMDetection framework's model zoo [9]. We used 100 COCO samples from classes relevant to the smart shopping cart use case to craft two adversarial patches with the DPatch attack [30], each corresponding to a different target class: "Banana" or "Apple." To create the adversarial samples, we placed the patches on 100 additional benign samples from four classes ("Banana," "Apple," "Orange," and "Pizza"). The test set used to evaluate X-Detect consisted of the 100 benign samples and their 100 adversarial counterparts. We note that the adversarial patches were not placed on scenes of their own target class, i.e., an "Apple" patch was not placed on "Apple" scenes and a "Banana" patch was not placed on "Banana" scenes.
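The digital adversarial samples are composed by pasting a patch onto a benign scene. A minimal, library-free sketch of the pasting step (toy pixel values and offsets; real pipelines operate on image tensors and blend the patch during optimization):

```python
def place_patch(image, patch, top, left):
    """Paste a patch (2-D list of pixel values) onto a copy of the
    image at the given offset, leaving the original image untouched."""
    out = [row[:] for row in image]
    for i, patch_row in enumerate(patch):
        for j, value in enumerate(patch_row):
            out[top + i][left + j] = value
    return out

# A 4x4 "scene" of zeros with a 2x2 patch of 9s placed at (1, 1).
image = [[0] * 4 for _ in range(4)]
patch = [[9, 9], [9, 9]]
patched = place_patch(image, patch, 1, 1)
print(patched[1])  # [0, 9, 9, 0]
```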
In the physical space, X-Detect was evaluated in a real-world setup in which physical attacks were performed on real products from the Superstore dataset. For this evaluation, we built a smart shopping cart from a standard shopping cart, two identical web cameras, and a personal computer (illustrated in Figure 3a). In this setup, the frames captured by the cameras were passed to a remote GPU server that hosted the target OD model. To craft the adversarial patches, we used samples of cheap products from the Superstore test set and divided them into two equally distributed data folds (15 samples from each class); each adversarial patch was crafted using one of the two folds. The patches were crafted using the DPatch [30] and Lee & Kolter [27] attacks (additional information is presented in the supplementary material). In total, we crafted 17 adversarial patches and recorded 1,700 adversarial videos in which expensive products bearing an adversarial patch were placed in the smart shopping cart. The test set used to evaluate X-Detect consisted of the adversarial videos along with an equal number of benign videos. These videos are publicly available. 4

Attack Scenarios

Table 3b presents the five attack scenarios evaluated and their corresponding threat models. The attack scenarios can be categorized into two groups: non-adaptive attacks and adaptive attacks. In the former, the attacker's threat model does not include knowledge about the defense approach, i.e., about the detection method used by the defender. X-Detect was evaluated on four non-adaptive attack scenarios that differ with regard to the attacker's knowledge: white-box (complete knowledge of the target model), gray-box (no knowledge of the target model's parameters), model-specific (knowledge of the ML algorithm used), and model-agnostic (no knowledge of the target model). The patches used in these scenarios were crafted using the white-box scenario's target model and were then used to simulate the attacker's knowledge in the other scenarios (each scenario targets different models). We also carried out adaptive attacks [6], i.e., attacks in which the attacker's threat model includes knowledge of the detection technique used and its parameters. We designed three adaptive attacks based on the LKpatch attack of Lee & Kolter [27], presented in equation 3, where P is the adversarial patch, L is the target model's loss function, and t are the transformations applied during training. Each adaptive attack is designed according to one of X-Detect's base detector settings: i) using just the OED, ii) using just the SPD, and iii) using the ensemble of the two. To adjust the attack to the first setup (i), an OE_SIFT term is added to the loss (equation 4), where p_target is the target class prototype and norm is a normalization function.
To adjust the LKpatch attack to incorporate the second setup (ii), we added the image processing techniques used by the SPD to the transformations in the Expectation over Transformation functionality t. The updated loss function is presented in equation 5, where sp are the image processing techniques used. To adjust the LKpatch attack to incorporate the last setup (iii), we combined the two adaptive attacks described above by adding the OE_SIFT term to the P_ASP loss. Additional information regarding the adaptive attacks' intuition and implementation can be found in the supplementary material.

Experimental Settings
All of the experiments were performed on the CentOS Linux 7 (Core) operating system with an NVIDIA GeForce RTX 2080 Ti graphics card with 24GB of memory. The code used in the experiments was written in Python 3.8.2 with the PyTorch 1.10.1 and NumPy 1.21.4 packages. We used different target models depending on the attack space and scenario in question. In the digital space, only the white-box, model-specific, and model-agnostic attack scenarios were evaluated, since they are the most informative for this evaluation. In the white-box scenario, the Faster R-CNN model with a ResNet-50-FPN backbone [39,20] in PyTorch implementation was used. In the model-specific scenario, three Faster R-CNN models were used: two with a ResNet-50-FPN backbone in Caffe implementation, each with a different (IoU-based) regression loss, and a third with a ResNet-101-FPN backbone. In the model-agnostic scenario, the Cascade R-CNN [5] and Grid R-CNN models were used, each with a ResNet-50-FPN backbone. In the physical space evaluation, in the model-agnostic scenario, a Cascade R-CNN model, a Cascade RPN model [46], and a YOLOv3 model [38] were trained with seed 42. The trained models are publicly available. 5 In the attacks performed, the learning rate was reduced automatically (on plateau), the batch size was set to one, and the patch was placed on the main object. The patch size was set to 120×120 pixels in the digital space and 100×100 pixels in the physical space. The transformations used in the LKpatch attack were brightness (in [0.8,1.6]) and random rotations. Our code is also publicly available. 6
All of the adversarial samples (i.e., scenes with an adversarial patch) were designed to meet the following three requirements: i) the target OD model still identifies an object in the scene (there is a bounding box); ii) the attack does not change the output's bounding box drastically; and iii) the attack changes the object's classification to a predefined target class or group of classes (e.g., 'cheap' products).
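Requirement ii can be checked with a standard intersection-over-union (IoU) score between the benign and attacked bounding boxes; the paper does not state its exact criterion, so any threshold applied to this score would be an assumption:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes; a simple
    way to verify that an attack leaves the bounding box largely
    unchanged (requirement ii)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

# A box stretched by two pixels still overlaps heavily (~0.83).
print(iou((0, 0, 10, 10), (0, 0, 10, 12)))
```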
In the experiments, X-Detect's components were set as follows. The OED was initialized with 10 prototypes from each class; to ensure that the prototypes were representative, we extracted the object associated with each prototype's class. The number of neighbors used by the prototype-KNN was set to seven. The SPD used the following image processing techniques: blur (six in the digital space and 12 in the physical space), sharpening, random noise (0.35 in the physical space), darkening (0.1), and arbitrary style transfer. In the arbitrary style transfer technique, the evaluated scene served as both the input and the style image, i.e., the input scene was only slightly changed.
The base detectors were evaluated separately and in the form of an ensemble. Two types of ensembles were evaluated: i) a majority voting (MV) ensemble, which sums the probabilities for each class and returns the class that received the highest sum; and ii) a 2-tier ensemble, which first applies the SPD and, if an alert is raised, passes the scene to the OED. To properly evaluate X-Detect, we implemented two state-of-the-art adversarial detectors, Ad-YOLO [22] and SAC [29] (Section 3), and compared their results to those of X-Detect. Both detectors were implemented according to the details in their respective papers; additional information is provided in the supplementary material.
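The two ensembles can be sketched as follows: `mv_ensemble` sums per-class probabilities as described, and `two_tier` consults the OED only for scenes the SPD flags (toy inputs, not the evaluated implementation):

```python
def mv_ensemble(prob_dicts):
    """Majority-voting ensemble: sum the per-class probabilities of
    the base detectors and return the class with the highest sum."""
    totals = {}
    for probs in prob_dicts:
        for cls, p in probs.items():
            totals[cls] = totals.get(cls, 0.0) + p
    return max(totals, key=totals.get)

def two_tier(scene, spd_alert, oed_alert):
    """2-tier ensemble: run the SPD first; only scenes it flags are
    passed to the (more alert-prone) OED for confirmation."""
    if not spd_alert(scene):
        return False          # benign: no second-stage cost
    return oed_alert(scene)   # alert only if the OED agrees

print(mv_ensemble([{"wine": 0.6, "milk": 0.4},
                   {"wine": 0.3, "milk": 0.7}]))  # milk
```

The early exit in `two_tier` is what buys the shorter inference time and lower FPR reported in the results.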

Experimental Results
The attacks used in the digital space (see Section 5.2) reduced the mean average precision (mAP) in all attack scenarios by 60%, i.e., the OD models' performance was substantially degraded. In X-Detect's evaluation, only the successful adversarial samples were used: 95%, 35%, and 52% of the adversarial samples were successful in the white-box, model-specific, and model-agnostic attack scenarios, respectively. Further details are available in the supplementary material. Figure 4 shows the digital space evaluation results (performed on the COCO dataset) for X-Detect, Ad-YOLO, and SAC for the white-box, model-specific, and model-agnostic attack scenarios. The figure presents the detection accuracy (DA), true positive rate (TPR), and true negative rate (TNR) of X-Detect's components, Ad-YOLO, and SAC. In the model-specific and model-agnostic attack scenarios, the results presented are the means obtained by each detector (see Section 5.3), with standard deviations of 0.034 and 0.017, respectively. The figure shows that X-Detect outperformed Ad-YOLO and SAC on all of the performance metrics: X-Detect obtained the highest TPR with the OED, the highest TNR with the 2-tier ensemble (along with SAC), and the highest DA with the 2-tier ensemble.
The attacks used in the physical evaluation (described in Section 5.2) decreased the models' performance substantially. In X-Detect's evaluation, only the successful adversarial samples were used: 80%, 80%, 74%, 33%, and 28% of the adversarial samples were successful in the adaptive, white-box, gray-box, model-specific, and model-agnostic scenarios, respectively (additional information can be found in the supplementary material). Table 1 presents the results of the physical space evaluation (performed on the Superstore dataset) for X-Detect, Ad-YOLO, and SAC for the white-box, gray-box, model-specific, and model-agnostic scenarios on six metrics: DA, TPR, TNR, false positive rate (FPR), false negative rate (FNR), and inference time (the detector's runtime for a single video). The results of the gray-box, model-specific, and model-agnostic scenarios are the means obtained by each detector, with standard deviations of 0.032, 0.044, and 0.024, respectively. The results in the table show that X-Detect in its different settings outperformed Ad-YOLO and SAC, with the exception of the inference time metric (∼0.4 seconds), which is discussed further in the supplementary material. X-Detect achieved the highest TPR with the OED, the highest TNR (along with SAC) with the 2-tier and MV ensembles, and the highest DA with the 2-tier ensemble.
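The reported metrics follow directly from the confusion counts (positives = adversarial scenes); for instance, hypothetical counts with no false positives yield the 0% FPR highlighted in the abstract:

```python
def detection_metrics(tp, tn, fp, fn):
    """Compute the evaluation metrics used in the paper from the
    confusion counts, where positives are adversarial scenes."""
    return {
        "DA": (tp + tn) / (tp + tn + fp + fn),  # detection accuracy
        "TPR": tp / (tp + fn),
        "TNR": tn / (tn + fp),
        "FPR": fp / (fp + tn),
        "FNR": fn / (fn + tp),
    }

# Hypothetical counts for illustration: 92 of 100 adversarial videos
# caught, all 100 benign videos passed.
m = detection_metrics(tp=92, tn=100, fp=0, fn=8)
print(m["DA"], m["FPR"])  # 0.96 0.0
```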
We also evaluated X-Detect's performance in the adaptive attack scenario. Adaptive attacks require the addition of a component that enables the attack to evade the known defense mechanism, making them more challenging to perform (see Section 5.3). When using X-Detect's parameters in the adaptive attacks' optimization process, the attacks did not converge, i.e., the attacks failed to produce a patch that deceived the target model while evading the defense method. Therefore, to evaluate X-Detect when faced with a proper attack, the parameters used were relaxed (see Section 5.4 and the supplementary material). Table 2 presents X-Detect's TPR and TNR in the adaptive attack scenario.
The results indicate that while these adaptive patches partially succeeded in deceiving the target model, they did not succeed in evading X-Detect, i.e., X-Detect successfully detected most of the adversarial samples while maintaining a TPR of at least 92%. This shows that evading X-Detect is difficult even when some of its parameters are relaxed, and without such relaxation the adaptive attacks did not converge. Therefore, X-Detect is successful in detecting adversarial patches in the adaptive attack scenario (more information can be found in the supplementary material). Figure 5 presents examples of X-Detect's explainable outputs, which can be used to justify the alerts raised.

Discussion
During the evaluation, several interesting attack behaviors were observed. When analyzing the attack's behavior in the crafting phase, we observed that the location of the adversarial patch influenced the attack's success, i.e., when the adversarial patch was placed on the attacked object, the attack's success rate improved. Furthermore, in analyzing the attacks in the physical space, we observed their sensitivity to the attacker's behavior, such as the angle, speed of item insertion, and patch size.
When analyzing the attacks in the different attack scenarios, we observed that as the knowledge available to the attacker decreased, the number of successful adversarial samples decreased as well. The reason for this is that the shift between attack scenarios relies on the patch's transferability. Therefore, the adversarial samples that succeed in the most restricted attack scenario (model-agnostic) can be considered the most robust; those samples are harder to detect, as reflected in the performance of X-Detect in its different settings. However, all of the results reflect the detection of a single attack; in the shopping cart use case, an attacker would likely try to steal as many items as possible. In the case of repeated attacks, X-Detect's ability to expose the attacker improves exponentially.
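The exponential improvement follows from treating repeated thefts as independent detection attempts: the chance of evading every check is the per-attempt miss rate (FNR) raised to the number of attempts. A small illustration using the at-least-92% adaptive-scenario TPR reported above (independence is an assumption):

```python
def evasion_probability(fnr, attempts):
    """Probability that an attacker evades detection in every one of
    `attempts` independent thefts; it shrinks exponentially with the
    number of attempts."""
    return fnr ** attempts

# With a per-attempt TPR of 0.92 (FNR = 0.08), three attempts:
print(round(evasion_probability(0.08, 3), 6))  # 0.000512
```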
In the evaluation, four detection approaches were used: only the OED, only the SPD, an MV ensemble, and a 2-tier ensemble, however the question of which approach is the most suitable for the adversarial detection task remains unanswered. Each of the approaches showed different strengths and weaknesses: i) the OED approach managed to detect most of the adversarial samples (high TPR) yet raised alerts on benign samples (high FPR); ii) the SPD approach detected fewer adversarial samples than the OED (lower TPR) yet raised fewer alerts on benign samples (lower FPR); iii) the MV ensemble reduced the gap between the two base-detectors' performance (higher TPR and lower FPR) yet had a longer inference time; and iv) the 2-tier ensemble reduced the MV ensemble's inference time and improved the identification of benign samples (higher TNR and lower FPR) yet detected fewer adversarial samples (lower TPR). Therefore, the selection of the best approach depends on the use case. In the retail domain, it can be assumed that: i) most customers would not use an adversarial patch to shoplift; ii) wrongly accusing a customer of shoplifting would result in a dissatisfied customer, unlikely to return to the store, and harm the company's reputation; iii) short inference time is vital in real-time applications like the smart shopping cart. Therefore, a company that places more value on the customer experience would prefer the 2-tier ensemble approach, while a company that prioritizes revenue above all would prefer the OED or MV ensemble approach.

Conclusion and Future Work
In this paper, we presented X-Detect, a novel adversarial detector for OD models that is suitable for real-life settings. In contrast to existing methods, X-Detect is capable of: i) identifying adversarial samples in near real time; ii) providing explanations for the alerts raised; and iii) handling new attacks. Our evaluation in the digital and physical spaces, the latter performed using a smart shopping cart setup, demonstrated that X-Detect outperforms existing methods in distinguishing between benign and adversarial scenes in the four non-adaptive attack scenarios examined while maintaining a 0% FPR, and we further demonstrated X-Detect's effectiveness under adaptive attacks. Future work may include applying X-Detect in different domains (e.g., security and autonomous vehicles) and expanding it to detect other kinds of attacks, such as physical backdoor attacks against OD models.