Exploiting Causality Signals in Medical Images: A Pilot Study with Empirical Results

We present a novel technique to discover and exploit weak causal signals directly from images via neural networks for classification purposes. In this way, we model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of the image. Our method consists of a convolutional neural network backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the scene. We develop different architecture variants and empirically evaluate all the models on two public datasets of prostate MRI images and breast histopathology slides for cancer diagnosis. We study the effectiveness of our module both in fully-supervised and few-shot learning, we assess its addition to existing attention-based solutions, we conduct ablation studies, and we investigate the explainability of our models via class activation maps. Our findings show that our lightweight block extracts meaningful information and improves the overall classification, while producing more robust predictions that focus on relevant parts of the image. That is crucial in medical imaging, where accurate and reliable classifications are essential for effective diagnosis and treatment planning.


Introduction
Automatic diagnosis models from medical data can potentially transform how patients are treated, especially in oncology. They could reduce the need for invasive tests and increase the likelihood of successful outcomes for the most severe cases. In this regard, some successful examples of machine learning (ML) systems for medical diagnosis exist, such as colon cancer diagnosis from gene expression profiling data (Su et al., 2022), diagnosis of neurological diseases using voice data (Wroge et al., 2018), and identification of patients with pulmonary hypertension using electronic health records (Kogan et al., 2023).
In recent years, the concepts of causal inference and causal reasoning have received increasing attention across the Artificial Intelligence (AI) community. This trend began with the very first work of the computer scientist Judea Pearl on Bayesian networks and the mathematical formalization of causality, which enabled the creation of computational systems that can automatically model causality (Pearl, 1985, 2009; Pearl and Mackenzie, 2018). Today, we have inspiring examples of the integration of causality into the ML community (Luo et al., 2020; Schölkopf, 2022) and deep learning research (Berrevoets et al., 2023), with extensions to causal representation learning (Schölkopf et al., 2021), causal discovery under distribution shifts (Perry et al., 2022) and with incomplete data (Wang et al., 2020). Unfortunately, this line of research has always had in common the fact that the processed data are tabular, structured, not always real but simulated, and very often accompanied by a priori information about the process that generated them.
Unlike tabular data, images do not include any explicit indication of objects or patterns in their representation. Instead, individual pixels convey a particular scene visually, and image datasets do not usually provide labels describing the objects' dispositions. Additionally, unlike video frames, a single image does not reveal the dynamics of appearance and change of the objects in the scene. These critical issues could explain why images have been neglected by research on causal discovery, which has focused on tabular data, where established algorithms exist (Spirtes and Glymour, 1991; Spirtes et al., 2000; Chickering, 2002). A particular case would be discovering hidden causalities among objects in an image dataset, as suggested by Terziyan and Vitko (2023), who conceive a way to compute possible causal relationships within images. Although the idea is compelling, that work is preliminary, and a thorough investigation of the effectiveness of their method is lacking.
In our work, we address this gap and propose a way to discover and exploit weak causality signals within images without requiring prior knowledge, using them to enhance convolutional neural network (CNN) classifiers. By combining a regular CNN with a causality-extraction module, we propose a new scheme based on feature map enhancement to enable "causality-driven" CNNs. We frame our system as an automatic diagnosis model from medical images, since we study the efficacy of the proposed methods with extensive empirical evaluations on a publicly available dataset of prostate MRI images.
Our paper is structured as follows. First, in Sec. 2, we provide the concepts behind the interpretation of causal signals in images. Then, we start Sec. 3 by describing the novelty of our work, namely the methodological framework and the causality modules we introduce. We also illustrate the dataset, the training scheme, and the evaluation details. Later, we present our main results in Sec. 4 and pull the threads together in the general discussion of Sec. 5.


Causality signals in images
Lopez-Paz et al. (2017) propose the idea of "causal disposition" as a simple way to understand the hidden causes in images, as opposed to the do-calculus and causal graphs of Pearl's framework (Pearl, 2009; Pearl and Mackenzie, 2018). In their view, by counting the number C(A, B) of images in which the causal dispositions of artifacts A and B are such that B disappears if one removes A, one can assume that artifact A causes the presence of artifact B when C(A, B) is greater than the converse C(B, A). For instance, they argue that the presence of a car causes the presence of a wheel, but not the other way around, because removing the car would make the wheel disappear, while removing the wheel would not make the car disappear. By studying such asymmetries, the authors find the causal direction between pairs of random variables representing features of objects and their contexts in images. Although the causal disposition concept is more primitive than the interventional approach, it could be the only way to proceed with limited a priori information. This concept leads to the intuition that any causal disposition induces a set of asymmetric causal relationships between the artifacts of an image (features, object categories, etc.) that represent (weak) causality signals regarding the real-world scene. A point of contact with machine vision systems would be to automatically infer such asymmetries from an observed image dataset.
Terziyan and Vitko (2023) suggest a way to compute estimates for possible causal relationships within images via CNNs. CNNs obtain the essential features required for classification not directly from the pixel representation of the input image but through a series of convolution and pooling operations designed to capture meaningful features from the image. Convolution layers are responsible for summarizing the presence of specific features in the image and generating a set of feature maps accordingly. Pooling consolidates the presence of particular features within groups of neighboring pixels in square-shaped sub-regions of the feature map. When a feature map F^i contains only non-negative numbers (e.g., thanks to ReLU functions) and is normalized in the interval [0, 1], we can interpret its values as probabilities of that feature being present at a specific location. For instance, F^i_{x,y} is the probability that feature i is recognized at coordinates (x, y). By assuming that the last convolutional layer outputs, and to some extent localizes, the object-like features, we may modify the architecture of a CNN such that the k feature maps (F^1, F^2, …, F^k) of size n × n obtained from that layer are fed into a new module that computes pairwise conditional probabilities of the feature maps. The resulting k × k map represents the causality estimates for the features and is called the causality map. Given a pair of feature maps F^i and F^j and the formulation that connects conditional probability with joint probability, P(F^i | F^j) = P(F^i, F^j) / P(F^j), Terziyan and Vitko (2023) suggest heuristically estimating this quantity by adopting one of two methods, namely Max and Lehmer. The Max method considers the joint probability to be the maximal presence of both features in the image (each one in its location):

P(F^i, F^j) ≈ max_{x,y}(F^i_{x,y}) · max_{x,y}(F^j_{x,y}).   (1)

On the other hand, the Lehmer method entails computing

P(F^i | F^j) ≈ LM_p(F^i × F^j) / LM_p(F^j),   (2)

where F^i × F^j is a vector of the n^4 pairwise multiplications between each element of the two n × n feature maps, and LM_p is the generalized Lehmer mean function (Bullen, 2003) with parameter p, an alternative to power means for interpolating between the minimum and maximum of a vector via the harmonic mean (p = −2), geometric mean (p = −1), arithmetic mean (p = 0), and contraharmonic mean (p = 1).
Equations 1 and 2 can be used to estimate asymmetric causal relationships between features F^i and F^j, since, in general, P(F^i | F^j) ≠ P(F^j | F^i). By computing these quantities for every pair i and j of the k feature maps, the k × k causality map is obtained. We interpret asymmetries in such probability estimates as weak causality signals between features, as they provide some information on the cause-effect relationship between the appearance of a feature in one place of the image and the presence of another feature in some other place of the image. Accordingly, a feature F^i may be deemed the reason for another feature F^j when P(F^i | F^j) > P(F^j | F^i), and vice versa.
In this work, we integrate a regular CNN with a causality-extraction module to explore the features, and the causal relationships between them, extracted during training. The previous work from which we started (Terziyan and Vitko, 2023) is preliminary; we introduce a new scheme based on feature map enhancement to enable "causality-driven" CNNs and provide an extensive empirical evaluation of the impact of this new introduction on real data. Our hypothesis is that it is possible and reasonable to extract weak causality signals from the individual images of some medical datasets without adding expert knowledge, and to leverage them to better guide the learning phase. Ultimately, a model trained in such a manner would exploit weak causal dispositions of objects in the image scene to distinguish the tumor status of a prostate image.

Figure 1: Overview of the proposed architecture. Once obtained, the causality map C can be flattened and concatenated to the feature maps (option cat) or fed to a causality-factors extractor (see Figure 2) to implement the option mulcat. The latter produces a vector of causality factors that weighs the feature maps, obtaining a causality-driven version of them, which is then concatenated to the original ones and fed to the classifier. Weighing mode m and causality direction d are two external signals used to tune the functioning of the system. This image is best seen in color.

Figure 2: The internals of the causality-factors extractor block of Figure 1, given an example causality map. Cyan squares in the causality map indicate whether the probability value of one element is greater than that of its element opposite the main diagonal. The causes box shows how the causality map is processed row-wise for each feature map: the number of times that feature is a cause of another feature is registered. Similarly, the effects box shows how the causality map is processed column-wise for each feature map. Before being summed element-wise, those two vectors are either passed as they are or the sign of their elements is reversed according to the causality direction d. The obtained vector is rectified and then returned as is or passed through boolean filtering depending on the weighing mode m. This image is best seen in color.
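As a concrete illustration, the two estimators above can be sketched in NumPy. This is a minimal sketch with names of our choosing: we use the standard Lehmer-mean formula, whose parameter convention may differ from the generalized version referenced above, and we estimate the marginal P(F^j) by the total activation of F^j, which is our assumption since the text only specifies the joint term.

```python
import numpy as np

def lehmer_mean(x, p):
    # Standard Lehmer mean of a vector of positive values;
    # zeros are dropped to keep negative exponents well-defined.
    x = x[x > 0]
    return np.sum(x ** p) / np.sum(x ** (p - 1))

def causality_map(fmaps, method="max", p=1):
    # fmaps: (k, n, n) array of feature maps normalized in [0, 1].
    # Returns a k x k matrix of estimates of P(F^i | F^j).
    k = fmaps.shape[0]
    flat = fmaps.reshape(k, -1)
    cmap = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            if method == "max":
                # joint ~ product of maximal activations (Eq. 1);
                # marginal P(F^j) ~ total activation of F^j (our assumption)
                denom = flat[j].sum()
                cmap[i, j] = flat[i].max() * flat[j].max() / denom if denom > 0 else 0.0
            else:
                # Lehmer option (Eq. 2): ratio of Lehmer means; assumes
                # the maps are not identically zero
                pairwise = np.outer(flat[i], flat[j]).ravel()  # n^4 products
                cmap[i, j] = lehmer_mean(pairwise, p) / lehmer_mean(flat[j], p)
    return cmap
```

Note that the map is asymmetric in general: cmap[i, j] and cmap[j, i] differ whenever the two feature maps have different total activations, which is exactly the asymmetry the method exploits.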

Embedding causality into CNNs
Usually, a CNN performs image classification based on the final set of (flattened) n × n × k feature maps obtained just before the dense layers that constitute the classifier. In the following, we describe how the architecture of such a regular CNN (baseline) might be modified to make the classifier consider the information entailed in the estimated causality map.
Feature concatenation is a basic (yet popular) way to embed additional information in CNNs. Indeed, by concatenating the flattened causality map to the flattened set of feature maps just before the classifier, Terziyan and Vitko (2023) let the CNN learn how these causality estimates influence image classification. That means that, in addition to the n × n × k features, the fully connected layers of the classifier will now have a k × k input, and the weights for the corresponding connections (i.e., actual causality influences) will be learned by back-propagation in the same way as the other network parameters. We will call this method the cat (concatenate) option (see the magenta box in Figure 1).
Alternatively, one could enhance or penalize parts of the existing information according to the newly gained one. Our proposition here is a new way to exploit the causality map: this time, it is used to compute a vector of causality factors that multiplies (i.e., weighs) the feature maps so that each feature map is strengthened according to its causal influence within the image's scene. After multiplication, the obtained causality-driven version of the feature maps is flattened and concatenated to the flattened original ones, producing a 2 × n × n × k input to the classifier. We will call this method the mulcat (multiply and concatenate) option (see the green box in Figure 1).
At the core of the mulcat option stands the causality factors extractor module, which yields the vector of weights needed to multiply the feature maps (see Figure 2). The main idea here is to look for asymmetries between elements opposite the main diagonal of the causality map, as they represent conditional asymmetries entailing possible cause-effect relationships. Indeed, some features may be more often found on the left side of the arrow (i.e., F^i →) than on the right side (i.e., → F^i). Accordingly, the 2D causality map is processed both row-wise and column-wise. In the former case, we register the number of times each feature map F^i was found to cause another feature map F^j, that is, P(F^i | F^j) > P(F^j | F^i). This way, we obtain a vector of values that quantify how much each feature map can be deemed a "cause". Conversely, in the column-wise processing, we register the number of times each feature map F^i was found to be caused by another feature map F^j, obtaining a vector of values that quantify how much each feature map can be deemed an "effect". At this point, we propose two variants of the functioning of the model. We allow an external signal d to represent the causality direction of analysis, which can be either causes or effects. When d = causes, the vector of causes (obtained row-wise) is not altered, while the sign of the elements of the effects vector (obtained column-wise) is changed. Hence, as those two vectors enter a summation point, the difference between causes and effects is obtained as the weight vector. On the other hand, when d = effects, the vector of effects is not altered, while it is the vector of causes whose sign is changed. Therefore, the difference between effects and causes is obtained at the summation point. Eventually, the obtained weight vector is rectified to set any negative elements to zero.
In addition, we conceive two variants of the model controlled by another external signal m, which represents the weighing mode and can be one of:
• full. The vector of non-negative causality factors is left at its full count and returned as is. As a result, the model weighs features according to their causal importance (a feature that is a cause 10 times more often than another receives 10 times more weight).
• bool. The factors undergo boolean thresholding: all non-zero factors are assigned a weight of 1, and the rest a weight of 0. This choice is more conservative and assigns the same weight to all the features that are most often causes.
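The row-wise and column-wise counting, the sign flip controlled by d, the rectification, and the boolean filtering controlled by m can be sketched as follows (a minimal NumPy sketch with names of our choosing; the convention that P(F^i | F^j) > P(F^j | F^i) marks F^i as a cause of F^j follows Sec. 2):

```python
import numpy as np

def causality_factors(cmap, d="causes", m="full"):
    # cmap: k x k matrix of P(F^i | F^j) estimates.
    # Compare each element with its counterpart opposite the main diagonal:
    # causes_mask[i, j] is True when F^i is deemed a cause of F^j.
    causes_mask = cmap > cmap.T
    causes = causes_mask.sum(axis=1)   # row-wise: times F^i is a cause
    effects = causes_mask.sum(axis=0)  # column-wise: times F^i is an effect
    # Sign flip and summation point, depending on the causality direction d.
    w = causes - effects if d == "causes" else effects - causes
    w = np.maximum(w, 0)               # rectify negative elements to zero
    if m == "bool":
        w = (w > 0).astype(w.dtype)    # boolean thresholding
    return w                           # 1 x k vector of weights
```

The returned vector multiplies the k feature maps element-wise along the channel dimension to obtain their causality-driven version.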
In the following sections, we describe the data used for our empirical evaluations, the different types of model architectures we utilized, and the implementation details of the training process.

Dataset
To validate the proposed methods, we utilized the publicly available 1500-acquisition dataset from the PI-CAI challenge (Saha et al., 2023). This dataset comprises multi-parametric MRI (mpMRI) acquisitions of the prostate, from which we selected only T2-weighted (T2w) images. Within this cohort of patients and respective scans, there were cases without any tumor (i.e., with no biopsy examination) and cases with cancer lesions. For each of the latter, the dataset contained biopsy reports expressing the severity as a Gleason Score (GS). In anatomopathology, a grade of 1 to 5 is assigned to each of the two most common patterns in the biopsy specimen based on the cancer severity. The two grades are then added together to determine the GS, which can assume all the combinations of scores from "1+1" to "5+5". Additionally, the dataset included each GS's group affiliation, defined by the International Society of Urological Pathology (ISUP) (Egevad et al., 2016) and ranging from 1 to 5, which summarizes the tumor severity at a higher level of aggregation. In this study, we included both cancerous and no-tumor patients. For the former, we only considered lesions with GS ≥ 3+4 (ISUP ≥ 2) and only selected the slices containing lesions by exploiting the expert annotations of the disease provided in the dataset. For the latter, we selected all the available slices. In the end, we obtained a total of 4159 images (from 545 patients), with a balanced distribution over the two classes: 2079 tumor images vs. 2080 no-tumor images. We divided the available images into training (2830 images), validation (515 images), and testing (814 images) subsets. During the splitting process, we ensured patient stratification (i.e., all the images of the same patient were grouped in the same subset to avoid data leakage) and class balancing.

Data pre-processing
We utilized the provided whole-prostate segmentation to extract the mask centroid for each slice. We then standardized the field of view (FOV) at 100 mm in both the x (FOV_x) and y (FOV_y) directions to ensure consistency across all acquisitions, and subsequently cropped each image around its centroid based on this value. To determine the number of rows (n_r) and columns (n_c) corresponding to the fixed FOV, we utilized the pixel spacing in millimeters along the x-axis (Δx) and the y-axis (Δy). The relationships used to derive the number of columns and rows are n_c = FOV_x / Δx and n_r = FOV_y / Δy, respectively. Furthermore, we resized all the images to a uniform matrix size of 96 × 96 pixels to maintain consistent pixel counts. Finally, we performed image normalization using an in-volume method: we calculated the mean and standard deviation (SD) of all pixels within the volume acquisition and normalized each image based on these values using the z-score technique.
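The cropping, resizing, and normalization steps can be sketched as follows (a NumPy sketch with hypothetical helper names; a proper resampler, e.g. bilinear interpolation, would replace the nearest-neighbour indexing used here for brevity):

```python
import numpy as np

def preprocess_slice(img, centroid, dx, dy, fov_mm=100.0, out_size=96):
    # Crop a fixed field of view (in mm) around the prostate-mask centroid.
    n_c = int(round(fov_mm / dx))  # columns covering FOV_x
    n_r = int(round(fov_mm / dy))  # rows covering FOV_y
    r0 = max(0, centroid[0] - n_r // 2)
    c0 = max(0, centroid[1] - n_c // 2)
    crop = img[r0:r0 + n_r, c0:c0 + n_c]
    # Resize to out_size x out_size via nearest-neighbour indexing.
    rows = (np.arange(out_size) * crop.shape[0] / out_size).astype(int)
    cols = (np.arange(out_size) * crop.shape[1] / out_size).astype(int)
    return crop[np.ix_(rows, cols)]

def zscore_in_volume(volume):
    # In-volume normalization: z-score with the whole volume's statistics.
    mu, sd = volume.mean(), volume.std()
    return (volume - mu) / sd
```

In practice, the centroid comes from the provided segmentation mask and dx, dy from the DICOM pixel-spacing tags read with pydicom.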

Architecture and training
We built different CNN models to automatically classify input images into the two classes: tumor vs. no-tumor. As for the architectures, we used the popular ResNet18 as the backbone for all the causality-driven models. To handle different image sizes, many common architectures use an adaptive average pooling layer that always outputs a 1 × 1 shape before the classifier, adjusting its parameters (such as kernel size, stride, and padding) based on the input size. However, this reduces the dimensionality of the feature maps and discards their 2D structure, which is needed for finding causalities. Therefore, we replaced the AdaptiveAvgPool2D layer of our ResNet18 with an identity layer in our experiments.
As described in Section 3.1, the information of the causality map can be integrated into the CNN classification in different manners. In this work, we developed six types of models and trained them to test the efficacy of the newly proposed architectures on medical image classification, namely:
• Baseline. This model is a regular ResNet18 architecture, with its AdaptiveAvgPool2D layer replaced by an identity layer. See Figure 1 (blue box) for a visual representation.
• Cat. This is a ResNet18 model modified to embed the causal information via concatenation, as in Terziyan and Vitko (2023). See Figure 1 (magenta box) for a visual representation.
• Mulcat-full-causes. This variant exploits the causality map to obtain a set of causality factors that weigh the feature maps. In this model, we set the causality direction d = causes and the weighing mode m = full.
• Mulcat-bool-causes. It is similar to the previous one, but we set the weighing mode to m = bool.
• Mulcat-full-effects. This variant reverses the way the set of causality factors is obtained by setting the causality direction d = effects. We use m = full in this model.
• Mulcat-bool-effects. It is analogous to the previous one, but with the weighing mode set to m = bool.
As shown in Figure 1, the different types of proposed models expose the classifier to a different number of input features. Therefore, the classifier is modified for each type according to the number of new neurons entering the fully-connected layer.
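The resulting classifier input sizes can be summarized in a small helper (ours, for illustration; k = 512 and n = 3 correspond to a ResNet18 fed with 96 × 96 images, whose convolutional stack downsamples by a factor of 32):

```python
def classifier_in_features(option, k=512, n=3):
    # Number of inputs to the fully connected classifier for each variant.
    if option == "baseline":
        return n * n * k           # flattened feature maps only
    if option == "cat":
        return n * n * k + k * k   # plus the flattened k x k causality map
    if option == "mulcat":
        return 2 * n * n * k       # original + causality-weighted maps
    raise ValueError(option)
```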
We optimized the way we computed the causality maps (using either the Max option (Eq. 1) or the Lehmer option (Eq. 2)) and, for the Lehmer option, we tried six different values of its parameter p: [−100, −2, −1, 0, 1, 100]. Consequently, we trained seven models for each of the five types of causality-driven models, resulting in 35 causality-driven models plus one baseline model.
Regarding the training of the models, we utilized the cross-entropy loss as the criterion and Adam as the optimizer. We trained the models for 200 epochs and set up a learning rate (LR) scheduler to linearly decrease the LR during training. Specifically, the scheduler multiplies the initial LR by a factor that changes linearly from 1.0 in the first epoch towards 0.1 at a pre-defined milestone, which we set to the total number of epochs (200). As for the models' hyperparameters, we investigated two values of the initial LR (0.01 and 0.001) and three values of weight decay (0.01, 0.001, and 0.0001). Accordingly, we trained the 36 models for each of the six combinations of hyperparameters and chose the best-performing model on the validation set. Additionally, we repeated all 216 experiments four times by changing the seed governing the random processes of the scripts.
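The schedule can be expressed as a multiplicative factor applied to the initial LR (a sketch of our reading of the description above; PyTorch's `torch.optim.lr_scheduler` module offers equivalent built-in schedulers):

```python
def lr_factor(epoch, milestone=200, start=1.0, end=0.1):
    # Multiplicative factor on the initial LR: decreases linearly from
    # 1.0 at the first epoch to 0.1 at the milestone, then stays there.
    t = min(epoch, milestone) / milestone
    return start + (end - start) * t
```

For example, with an initial LR of 0.001, the effective LR at epoch 100 would be 0.001 × 0.55 = 0.00055.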

Quantitative evaluation
During training, we utilized the accuracy obtained by the models on the validation set to track their evolution across epochs, selecting the best-performing one once the training phase ended. Then, we evaluated the selected models on the external, never-before-seen test set and reported their accuracy. This way, we obtain a quantitative metric to compare the baseline architecture and the proposed methods.
Although a direct comparison among them already provides information on the efficacy of the proposed methods, we performed additional assessments. We conceived two ablation studies whose common idea is to distort the information usually brought to the network through the causality map. Concerning the cat option, the only contribution of the causality map to the classification resides in the flattened elements that are concatenated to the actual (flattened) feature maps. Therefore, a natural ablation architecture for such a setting is to create a fictitious causality map filled with random probability values. We called this model the Ablation-cat. On the other hand, when it comes to the mulcat option, the main functionality is to extract a vector of meaningful causality factors that serve as weights for the feature maps. Hence, we created the Ablation-mulcat model, where we modify that vector to weigh features randomly rather than in a principled way. This model comes in two variants according to the possible values of the causality factors mode m. Indeed, when m = full, the 1 × k vector of causality factors (i.e., weights) is replaced with a random vector of the same size, with integer values ranging from 0 (a feature map that is never the cause of another feature) to k − 1 (one that is the cause of every other feature). When m = bool, the values of the weights are randomly assigned to either 0 or 1. Since, in this setting, the weights are hand-crafted, there is no need to consider the causality direction; therefore, this ablation study is valid for both d = causes and d = effects.
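The random substitution of the causality factors can be sketched as follows (NumPy, names ours):

```python
import numpy as np

def ablation_factors(k, m="full", seed=0):
    # Replace the learned causality factors with random weights of the
    # same range, to check whether the learned ones carry useful signal.
    rng = np.random.default_rng(seed)
    if m == "full":
        # integers in [0, k-1]: from "never a cause" to "cause of all others"
        return rng.integers(0, k, size=k)
    # bool mode: each feature map is randomly kept (1) or suppressed (0)
    return rng.integers(0, 2, size=k)
```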

Qualitative evaluation
To further investigate the possible benefits of integrating causality into CNNs for medical image classification, we performed explainable AI (XAI) experiments on the best-performing model of each type. Specifically, we aimed to obtain class activation maps (CAM) for the networks' decisions in all six cases of our investigation: baseline, cat, mulcat-full-causes, mulcat-bool-causes, mulcat-full-effects, and mulcat-bool-effects. Since investigating the variability of the visual output across XAI methods is outside the scope of our work, we chose the popular Grad-CAM method (Selvaraju et al., 2017), implemented in the pytorch-gradcam library (Gildenblat and contributors, 2021). For the same reason, we selected the last convolutional layer of our architectures as the target layer for which to compute the CAM, and performed the analysis with standard parameters. A more systematic analysis would require investigating the CAM output on all layers of the CNN and optimizing the smoothing parameters.

Implementation details
All the experiments in this study ran on an NVIDIA A100 40 GB Tensor Core GPU of the AI@Edge cluster of our Institute. We used Python 3.8.15 and PyTorch (version 1.13.0, CUDA 11.1) as the back-end library, together with other libraries such as scikit-learn 1.2.0, grad-cam 1.4.8, pydicom 2.3.1, and pillow 9.4.0. Docker version 20.10.11 (build dea9396) was installed on the machine. To make results reproducible for each battery of experiments, we set a common seed for the random sequence generator of all the random processes and PyTorch functions.

Results
Our main results are shown in Table 1. While the baseline model achieved an accuracy of 68.38, embedding causality in different forms improved performance. For instance, when the causality map was used with the cat version, the best-performing models were obtained with the Lehmer method (p = 100) and achieved an accuracy of 70.07. As for the mulcat models, where the causality map is ultimately exploited in different ways depending on the mode m and the direction d, they all ranked above the baseline. Specifically, the best full-causes models were obtained with the Lehmer method (p = −100) and achieved 71.82 accuracy. The bool version of those models was also obtained with Lehmer, but with p = 1, and achieved an accuracy of 70.31. On the other hand, the best full-effects models were obtained with the Lehmer method (p = 0) and achieved 69.96 accuracy. With the bool version, the models reached an accuracy of 71.13 when using the Lehmer method with p = −1. Table 1 also shows the results of the ablation study. When used in the cat setting, the ablation models obtained an accuracy of 52.08, whereas the ablation mulcat models obtained accuracy values of 49.63 and 67.18 when using the full and bool modes, respectively.
In addition to the quantitative experiments, we obtained qualitative results for the six models by comparing their CAMs given the same input test image. Figures 3 and 4 show results for some cancerous and no-tumor cases, respectively, for which all the models yielded the same correct prediction. In each of those figures, rows represent the different scans, while columns represent, from left to right, the original T2w input image, the CAM of the baseline (non-causality-driven) model, and the CAMs of the causality-driven models with their specific settings.

Discussion
In this work, we presented a new method for automatically classifying medical images that uses weak causal signals in the image to model how the presence of a feature in one part of the image affects the appearance of another feature in a different part of it. Our results seem to indicate that it is possible to exploit weak causality signals in medical images to enhance the performance of neural classifiers. In general, all the models obtained in a causality-driven way achieved higher accuracy on the test set than the baseline. This superiority is sometimes limited to a relative increase of 2.31%, as in the case of mulcat-full-effects, but it reaches 5.03% in the case of mulcat-full-causes. We found that all the most effective models used the Lehmer method to compute the causality map, even though it required more memory and time than Max. We experimented with six different integer values of the parameter p to sample the range of possible values. A possible improvement would be to let the network learn the parameter p itself instead of fixing it beforehand.
Although this relative improvement is noticeable from a quantitative point of view, it is quite small in absolute terms, and the different causal models seem to behave roughly the same way. To investigate potential benefits on a different level, we deepened the analysis and found that significant differences can emerge on the XAI side. The trend we noticed is that mulcat-full-causes and mulcat-bool-causes are consistently more focused on the discriminative parts of the image (i.e., the prostate gland area) than the baseline and, especially, the corresponding -effects models (which, e.g., look at the rectum, bladder, or the lateral muscle bundles). This supports our hypothesis that using causes rather than effects allows the network to obtain more faithful results.
Among the methods that exploit causal information, cat proves to be one of the worst. That is evident both quantitatively (see Table 1) and qualitatively, where it often makes the network look at the wrong portions of the image. The reason for this behavior could be the considerable complexity added to the model to account for all the combinations of feature maps. In fact, the number of input neurons of the classifier goes from n × n × k to n × n × k + k × k, which for high k results in thousands of additional connections (e.g., 512 × 512 = 262144 new input features for a ResNet18).
We confirmed the numerical results of our main study by conducting ablation studies on the actual influence of the causality factors (i.e., weights) on generating useful causality-driven feature maps. As anticipated, when we substitute the causal weights with random vectors, the accuracy of the final model is lower than that of its causality-driven counterpart (see Table 1). That seems to indicate that, even if weak, the causality signals learned during training help the network perform better. All in all, however, we must note that the performance values obtained across all our experiments are not yet sufficient for adopting such systems in clinical practice; this is preliminary work that we will extend. Note that we have not compared with the results of other works that classify the same dataset, because the focus of our work was different and we did not design the whole system to respond to the PI-CAI challenge.
One of the limitations of our work is that we used only one medical dataset and one backbone architecture, although this is consistent with the pilot nature of our study. We plan to thoroughly investigate the effectiveness of different base convolutional models on multiple medical (and non-medical) datasets. Indeed, that would help in finding a better-suited architecture to detect finer details in the image and, consequently, extract more informative latent representations.
Moreover, we acknowledge that our methods consider potential causal relationships between pairs of features rather than among more than two. That, of course, can lead to suboptimal results, given the impossibility of excluding confounders. In future experiments, we would be interested in extending the operation to more variables and devising variations inspired by the classic PC algorithms of the literature on causal discovery in tabular data (Spirtes and Glymour, 1991).
There could be other directions to explore from our work, both on the application and the architectural level. In fact, it would be interesting to see how the proposed methods perform in Few-Shot Learning scenarios (Fink, 2004; Fei-Fei et al., 2006) instead of the fully supervised setting used in this work. Moreover, it would be interesting to draw inspiration from the visual attention mechanism (Jetley et al., 2018; Yan et al., 2019; Schlemper et al., 2019), which extracts information from the network at different depths (local and global features). In this regard, we will propose methodological improvements based on extracting the causality map also from the internal layers of the network, not only from the last one. Our work could also be expanded by proposing new methods to combine causal information besides concatenation and weighted emphasis of feature maps. Finally, we foresee a possible integration of our methods within the convolutional blocks of generative models such as GANs (Goodfellow et al., 2020) and diffusion models (Ho et al., 2020; Rombach et al., 2022), to guide the generation of more realistic images.

Conclusions
This contribution focuses on discovering and exploiting weak causality signals directly from medical images via neural networks for classification. Our method consists of a CNN backbone and a causality-factors extractor module, which computes weights to enhance each feature map according to its causal influence in the image's scene. We developed different architecture variants and empirically evaluated all of our models on a public dataset of prostate MRI images for prostate cancer diagnosis. Despite the mentioned limitations and planned future improvements, our findings suggest that adding a causality-driven module to CNN classification systems can produce better models. That is true not only because it enhances the overall classification results but also because it makes the model focus more precisely on the critical regions of the image, leading to more accurate and robust predictions. This aspect is especially important in medical imaging, where accurate and reliable classification is essential for effective diagnosis and treatment planning.

Acknowledgements
The research leading to these results has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 952159 (ProCAncer-I), and partially from the Regional Project PAR

Figure 1:
Figure 1: Overview of the different causality-driven settings proposed in this work, assuming k = 6 feature maps as an example. The tensor F of feature maps obtained from a CNN just before the classifier can be either flattened and used as is (Option baseline) or leveraged to compute the causality map C via the max or lehmer method. Once obtained, C can be flattened as well and concatenated to the feature maps (Option cat) or fed to a causality factors extractor (see Figure 2) to implement the Option mulcat. The latter produces a vector of causality factors that weighs the feature maps, obtaining a causality-driven version of them, which is then concatenated to the original ones and fed to the classifier. Weighing mode m and causality direction d are two external signals used to tune the functioning of the system. This image is best seen in color.
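As a rough sketch of how a causality map could be computed with the max option, one can treat the maximum activation of each (non-negative) feature map as a proxy for the probability that the feature appears in the image; the exact probability estimator below is an assumption made here for illustration, not necessarily the paper's formula:

```python
import numpy as np

# Hedged sketch: C[i, j] approximates P(F_i | F_j). The maximum activation
# of each feature map serves as a presence proxy; the joint probability is
# approximated by the product of maxima and normalized by P(F_j).
def causality_map(feature_maps, eps=1e-8):
    # feature_maps: (k, n, n), assumed non-negative (e.g., after ReLU)
    m = feature_maps.reshape(feature_maps.shape[0], -1).max(axis=1)  # (k,)
    joint = np.outer(m, m)             # proxy for P(F_i, F_j)
    return joint / (m[None, :] + eps)  # P(F_i | F_j) = joint / P(F_j)

C = causality_map(np.random.rand(6, 4, 4))
print(C.shape)  # (6, 6)
```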
Figure 2: The internals of the causality factors extractor block of Figure 1, given an example causality map. Cyan squares in the causality map indicate whether the probability value of one element is greater than that of its element opposite the main diagonal. The causes box shows how the causality map is processed row-wise for each feature map: the number of times that feature is a cause of another feature is registered. Similarly, the effects box shows how the causality map is processed column-wise for each feature map. Before being summed element-wise, these two vectors are either passed as they are or the sign of their elements is reversed according to the causality direction d. The obtained vector is rectified and then returned as is or passed through boolean filtering depending on the weighing mode m. This image is best seen in color.
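The row-wise and column-wise counting described in this caption can be sketched as follows; parameter names and the exact combination rule for the two count vectors are illustrative assumptions:

```python
import numpy as np

# Sketch of the causality factors extractor: count, for each feature, how
# often it is a "cause" (row-wise, C[i, j] > C[j, i]) and how often it is an
# "effect" (column-wise), combine the counts according to the causality
# direction d, rectify, and optionally binarize per the weighing mode m.
def causality_factors(C, d="causes", m="plain"):
    stronger = C > C.T                            # the cyan squares
    causes = stronger.sum(axis=1).astype(float)   # row-wise counts
    effects = stronger.sum(axis=0).astype(float)  # column-wise counts
    combined = causes - effects if d == "causes" else effects - causes
    rectified = np.maximum(combined, 0.0)         # ReLU
    return (rectified > 0).astype(float) if m == "bool" else rectified

C = np.array([[0.5, 0.9, 0.7],
              [0.1, 0.5, 0.8],
              [0.2, 0.3, 0.5]])
print(causality_factors(C, d="causes", m="bool"))  # [1. 0. 0.]
```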

Figure 3:
Figure 3: Visual assessment of class activation maps for cancerous cases. Each row represents a different scan, and the columns represent the Grad-CAM outputs for the baseline model and for all the causality-driven proposed variants. Best seen in color.

Figure 4:
Figure 4: Visual assessment of class activation maps for no-tumor cases. Each row represents a different scan, and the columns represent the Grad-CAM outputs for the baseline model and for all the causality-driven proposed variants. Best seen in color.

Table 1
Results of the best-performing models w.r.t. the causality setting, the mode of computing the causality factors, and the direction used to encode the causality factors. We report accuracy results on the test set as mean and standard deviation (in brackets) values over four repetitions of the experiments with different seeds.