Radio Galaxy Zoo: Leveraging latent space representations from variational autoencoder

We propose to learn latent space representations of radio galaxies, and to this end train a very deep variational autoencoder (VDVAE) on RGZ DR1, an unlabeled dataset. We show that the encoded features can be leveraged for downstream tasks such as classifying galaxies in labeled datasets and similarity search. Results show that the model is able to reconstruct its given inputs, capturing their salient features. We use the latent codes of galaxy images from the MiraBest Confident and FR-DEEP NVSS datasets to train various non-neural network classifiers. We find that the latter can differentiate FRI from FRII galaxies, achieving accuracy ≥ 76%, roc-auc ≥ 0.86, specificity ≥ 0.73 and recall ≥ 0.78 on the MiraBest Confident dataset, comparable to results obtained in previous studies. The performance of simple classifiers trained on FR-DEEP NVSS data representations is on par with that of a deep learning (CNN based) classifier trained on images in previous work, highlighting how powerful the compressed information is. We successfully exploit the learned representations to search for galaxies in a dataset that are semantically similar to a query image belonging to a different dataset. Although generating new galaxy images (e.g. for data augmentation) is not our primary objective, we find that the VDVAE model is a relatively good emulator. Finally, as a step toward anomaly/novelty detection, a density estimator, a Masked Autoregressive Flow (MAF), is trained on the latent codes, such that the log-likelihood of data can be estimated. The downstream tasks conducted in this work demonstrate the meaningfulness of the latent codes.


Introduction
Galaxy morphology is a powerful probe for investigating galaxy evolutionary processes, e.g. star formation history and the physical processes that galaxies undergo in their environment. Surveys like DESI [1] and SDSS [2,3], which make tens of millions of galaxy images available, provide insights into galaxy formation and evolution. On the radio side, a great deal of effort has been made toward building datasets of radio galaxy images, e.g. Radio Galaxy Zoo [4], and upcoming large experiments like SKA [5,6] will increase the amount of data available. Most of the methods that have been considered to identify galaxies with different morphological features are supervised learning based, which relies heavily on labeled data. So far, they have been successful, although the manual labeling process is not only expensive but could also potentially introduce biases into the data. Moreover, for new scientific discoveries and for searching for anomalies in large uncurated datasets, resorting to feature extractors that are trained in a supervised learning setup is not optimal, as they are not robust to noise or dataset shift.
Self-supervised learning (SSL) [7][8][9][10], which does not require data labeling, has been considered to uncover patterns in unlabeled datasets by learning robust representations of the high dimensional images. For example, [11] successfully used contrastive learning to search for galaxies that are semantically similar in large datasets. [12] considered the SimCLR method [13] to learn representations of astronomical images from SDSS, and [14] opted for the Bootstrap Your Own Latent (BYOL) method [8] to extract important features of radio galaxies.
In this work, we aim to learn latent codes of radio galaxies using a generative model, the Very Deep Variational AutoEncoder (VDVAE). Earlier work [15] used a VAE, whose encoder and decoder were both composed of only fully connected layers, to generate synthetic images of Fanaroff-Riley Class I (FRI) and Class II (FRII) radio galaxies. Their approach was capable of generating realistic radio galaxy images, but the generated and reconstructed images were blurry, which could be attributed to the lack of expressivity of the network. Our main goal in this work, unlike the case studied in [15], is to highlight the ability of a deep generative model, the VDVAE, to learn meaningful representations which can be leveraged for various downstream tasks. We also show how to estimate the log-likelihood of data using the learned representations, which is useful within the context of anomaly/novelty detection. We present the datasets used in our analyses in Section 2, and introduce the model considered in this study and other SSL based methods used for comparison in Section 3. The main results and the data likelihood estimation are reported in Sections 4 and 5 respectively, and we conclude in Section 6.

Data
We make use of the Radio Galaxy Zoo Data Release 1 (RGZ DR1) (Wong et al. 2023 in prep) to train and evaluate our generative model. The dataset used in our analyses contains ∼ 100,000 unlabeled galaxies with their corresponding projected angular sizes in arcseconds. The input images to our model have a selected resolution of 64 × 64 pixels. To investigate the ability of our network, and that of the other SSL based methods used for comparison in our analyses, to compress the images, we train various non-neural network methods on the latent features of galaxy images from two different datasets, the MiraBest Confident dataset (MBC)1 [16][17][18] and the FR-DEEP NVSS dataset [19] 2. The idea is to identify FRI and FRII galaxy images in each dataset by exploiting only their representations. MBC and FR-DEEP NVSS have 729/104 (train/test) and 550/50 (train/test) instances respectively, and their images are also cropped to 64 × 64 pixels. The numbers of FRI and FRII galaxies in the training examples are roughly equal in both datasets, with an imbalance ratio of ∼ 0.5. It is worth noting that RGZ DR1 contains some MBC samples, which are filtered out when training the feature extractors.

Models
In our investigation, we also train various SSL based methods and compare their performance with that of our network, specifically in terms of using the encoded features to identify galaxy types in the labeled datasets, MBC and FR-DEEP NVSS. In this section we provide the technical details of each algorithm together with the hyperparameters selected to train them.

Very deep variational autoencoder (VDVAE)
Variational autoencoder (VAE) [20] is a type of generative model composed of an encoder q_ϕ(z|x) (an approximate posterior, given the intractability of the true posterior), a decoder p_θ(x|z) and a prior p_θ(z). The two networks ϕ and θ are trained simultaneously by maximizing the evidence lower bound (ELBO), where the first term denotes the reconstruction error, which measures how well the model recovers the inputs, and the second term is the Kullback-Leibler (KL) divergence, quantifying the dissimilarity between q_ϕ(z|x) and p_θ(z). It is worth noting that VAE outputs (either reconstructed or generated images) are known to suffer from blurriness, which can potentially be mitigated by controlling the contribution of the KL divergence to the total loss with a hyperparameter β [21]:

ELBO = E_{z∼q_ϕ(z|x)}[log p_θ(x|z)] − β D_KL(q_ϕ(z|x) || p_θ(z)).    (3.2)
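To make the β-weighted objective concrete, the KL term has a closed form when the posterior is a diagonal Gaussian and the prior is standard normal. The sketch below is a hedged numpy illustration, not the paper's implementation; a Gaussian likelihood is assumed for the reconstruction term, and the function names are ours:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    # Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

def beta_elbo(x, x_recon, mu, logvar, beta=1.0):
    # Per-sample beta-ELBO: Gaussian reconstruction term (up to a constant)
    # minus the beta-weighted KL divergence.
    recon = -0.5 * np.sum((x - x_recon) ** 2, axis=-1)
    return recon - beta * gaussian_kl(mu, logvar)
```

Increasing β trades reconstruction fidelity for a posterior closer to the prior, which is the knob used to control blurriness.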
There are several variants of the VAE model, but we consider the Very Deep Variational Autoencoder (VDVAE) prescribed by [22] in our analyses. In order to increase the expressivity of both the prior p_θ(z) and the approximate posterior q_ϕ(z|x), [22] proposed a hierarchical VAE comprising many stochastic layers of latent variables. The latter have different resolutions z_0, z_1, ..., z_N which are conditionally dependent on each other according to

q_ϕ(z|x) = q_ϕ(z_0|x) ∏_{i=1}^{N} q_ϕ(z_i|z_{<i}, x),   p_θ(z) = p_θ(z_0) ∏_{i=1}^{N} p_θ(z_i|z_{<i}),    (3.3)

where N is the number of layers, and the conditionals q_ϕ(·) and p_θ(·) are parameterized as diagonal Gaussians. In this work, we consider the latent variable with the lowest resolution z_0, which is a vector of length 256, i.e. a feature vector with 256 components. Figure 1 presents a schematic diagram of the model architecture. A residual block, which comprises 4 convolutional layers, is an important component of the two networks ϕ and θ. The encoder contains multiple stages which are built by stacking residual blocks (see red blocks in Figure 1). The output of one stage is downsampled using average pooling. Each stage of the decoder is composed of chained top-down blocks. At the level of each top-down block, the prior, the posterior and the latent variable are computed using one residual block, another residual block and one convolutional layer respectively; a third residual block is used at the output. The feature maps output by the last top-down block at a given stage are upsampled using the nearest-neighbor method. Both networks (ϕ and θ) have the same number of stages, and the dimensions of the feature maps from two corresponding stages are the same. The input of each top-down block of the decoder at a given stage is concatenated with the output of the last residual block at the corresponding stage of the encoder (see Figure 1). The augmented feature maps resulting from this mixing are used to compute the conditionals q_ϕ(·) and p_θ(·) in Equation 3.3. The encoder and decoder have 6 stages of {3, 3, 2, 2, 2, 1} residual blocks and {5, 5, 4, 3, 2, 1} top-down blocks respectively. The decoder is chosen to be a bit deeper for good quality of generated images. Our choice might be suboptimal but is good enough for our purpose. We use the RMSProp optimizer with a learning rate of 0.00002, momentum set to 0.9, and weight decay of 0.0001. We train the model for 100 epochs with a batch size of 32. The learning rate is reduced by a factor of 0.5 whenever the validation loss does not improve over 10 epochs during training, i.e. using the ReduceLROnPlateau scheduler. We mainly follow the parameters in [22] with some adjustments due to computing resources.

SimCLR method
Contrastive learning consists of minimizing the distance between two different augmentations of an image in latent space while increasing the distance between representations of augmented views of different images; i.e., in latent space, an image and its transformations are clustered together, and pushed away from other images and their corresponding augmentations.
In this work we consider the SimCLR method [7], which applies two stochastic transformations to an image, resulting in two different augmented views x̃_i and x̃_j which form a positive pair {x̃_i, x̃_j}. The corresponding features of the latter, h_i and h_j respectively, are extracted via an encoder. Finally, the representations {h_i, h_j} are projected into latent space using a multilayer perceptron (MLP), giving {z_i, z_j} as shown in Figure 2. The separation of positive pairs {z_i, z_j} in latent space is minimized while that of negative pairs is maximized using a contrastive loss, also known as NT-Xent (normalized temperature-scaled cross entropy loss) [7,23]

ℓ_{i,j} = −log [ exp(sim(z_i, z_j)/τ) / Σ_{k=1}^{2N} 1_{[k≠i]} exp(sim(z_i, z_k)/τ) ],

where the indicator 1_{[k≠i]} is equal to 1 if and only if k ≠ i and 0 otherwise, and τ is known as the temperature parameter. The function sim(z_i, z_j) denotes the cosine similarity sim(z_i, z_j) = z_i · z_j /(||z_i|| ||z_j||). In our case, the stochastic transformation is defined by a set of data augmentations: a random horizontal flip, a random vertical flip, a random crop, a random color jitter, and a Gaussian blur. We use ResNet-34 [24] as a backbone. The model is trained for 1000 epochs, using the LARS optimizer with a learning rate lr = 0.001 and batches of 1024 instances. The encoded features3, which are arrays of length 512, are projected into latent space using an MLP with one hidden layer, yielding vectors with 128 components.
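A minimal numpy version of the NT-Xent loss can clarify the mechanics. This is a sketch for illustration, not the training code: it assumes the batch is ordered so that rows 2k and 2k+1 are the two augmented views of image k, and omits framework details of [7]:

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss for a batch of 2N projections, where rows 2k and 2k+1
    are the two augmented views of image k."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via dot products
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude the k = i term
    n = len(z)
    pos = np.arange(n) ^ 1                            # index of each row's positive pair
    # -log( exp(sim positive) / sum_k exp(sim_k) ), averaged over the batch
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Perfectly aligned positive pairs with dissimilar negatives give a lower loss than a batch of random projections.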

BYOL method
In order to avoid collapsed representations, contrastive methods such as SimCLR learn to distinguish representations of distorted views of an image from those of different images, and the representations learned by SimCLR are of better quality with larger batches during training [7]. Unlike SimCLR, the BYOL method [8] bypasses the need for negative examples, and instead uses an online network that learns to predict the outputs of a target network (Figure 3). The former, defined by its parameters θ, comprises an encoder that outputs a representation y_θ which, similar to the case of SimCLR, is projected into latent space as z_θ. To avoid collapsing results, a predictor q_θ(z_θ), which processes z_θ, is added to the online network (see Figure 3). The target network architecture is a copy of that of the online network, but its weights ξ are computed from an exponential moving average of θ at each training step according to

ξ ← κξ + (1 − κ)θ,

where κ ∈ [0, 1] denotes the decay rate. In other words, the gradients related to the target parameters are not computed. Two augmented views of an image, x̃_i and x̃_j, obtained from stochastic transformations, are passed through the online and target pipelines respectively (see Figure 3), and the online network is trained to predict the target projection z_ξ, resulting in refined representations. This bootstrapping procedure helps the online network improve the quality of its learned representations as the training progresses. The loss is defined by a mean squared error between the (L2-normalized) target projection and online prediction [8]

L = || q̄_θ(z_θ) − z̄_ξ ||²_2,

where the bars denote L2 normalization.
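The target-network update is only a few lines of code. Below is a hedged sketch with plain numpy arrays standing in for network parameter tensors (the function name is ours):

```python
import numpy as np

def ema_update(target_params, online_params, kappa=0.99):
    """BYOL-style target update: xi <- kappa * xi + (1 - kappa) * theta.
    No gradients flow through the target network; it only tracks the online one."""
    return [kappa * xi + (1.0 - kappa) * th
            for xi, th in zip(target_params, online_params)]
```

With κ close to 1 the target network changes slowly, which is what stabilizes the bootstrapping.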

SimSiam method
SimSiam [10] dispenses with both a momentum encoder and negative examples to prevent collapsing results. Like SimCLR, the parameters are shared between the two pipelines (blue and red branches in Figure 4), and similar to BYOL, an augmented view of an image is predicted from another augmented view of the same image. In SimSiam, two different augmentations of the same image, x̃_i and x̃_j, are encoded to obtain two representations y_1 and y_2 respectively. The latter are in turn projected into a latent space, producing z_1 and z_2 respectively. The prediction p_1, which results from transforming z_1 via a prediction head, is matched with the latent representation z_2 of the second branch by minimizing the negative cosine similarity [10]

D(p_1, z_2^stopgrad) = − p_1 · z_2^stopgrad / (||p_1|| ||z_2^stopgrad||),

where z_2^stopgrad denotes the stop-gradient operation on z_2, which is the key aspect of the method. The prediction p_2 from the second branch is similarly matched with z_1, on which stop-gradient acts as well, and the total loss is given by [10]

L = (1/2) D(p_1, z_2^stopgrad) + (1/2) D(p_2, z_1^stopgrad).

We select ResNet-34 as the encoder in our implementation, and use the LARS optimizer with learning rate 0.0005. We train the model for 1000 epochs with a batch size of 1024. Both the projector and predictor are one-hidden-layer MLPs that convert their input into an array of length 128. We also use the same stochastic transformations chosen for SimCLR and BYOL. It is worth noting that we use the lightly ssl [25] framework and follow the examples in its documentation 4 to build the architecture of all the SSL based models in this work.
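The symmetrized negative-cosine objective can be written compactly. A hedged numpy sketch follows (in an autodiff framework z_1 and z_2 would be detached from the graph; here the stop-gradient is only noted in comments):

```python
import numpy as np

def neg_cosine(p, z):
    # Negative cosine similarity D(p, z); z is treated as a constant
    # (the stop-gradient operand in SimSiam).
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -(p * z).sum(axis=1).mean()

def simsiam_loss(p1, p2, z1, z2):
    # Symmetrized total loss: L = D(p1, stopgrad(z2))/2 + D(p2, stopgrad(z1))/2.
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```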

Reconstruction
The top and bottom rows in Figure 5 show examples of input images from the unlabeled test set of the RGZ DR1 dataset and their corresponding reconstructions by the decoder, respectively. Results suggest that the model is able to reconstruct its targets. VAEs are known to suffer from blurry generated/reconstructed images, and the examples presented in Figure 5 have been cherry-picked to highlight the predictive power of the algorithm, which can recover a diffuse jet of a target (e.g. last right panel of the bottom row). It can also be noticed that the diffuse structures surrounding the hotspots in the first left panel of the top row are visually recovered in the reconstruction.

Visualization of the learned representations
The entire training dataset is fed to each encoder in order to extract the representations, which consist of vectors of length 256 in the case of VDVAE and 512 for the SSL methods, since they all use the same backbone, i.e. ResNet-34. For visualization, a dimensionality reduction method is used to further project the encoded features into a two dimensional subspace. We consider t-distributed stochastic neighbor embedding (t-SNE) [26] in our analyses to demonstrate the ability of each representation learning model to compress the galaxy images. The first, second, third and fourth panels in Figure 6 show the results obtained from VDVAE, SimCLR, BYOL, and SimSiam respectively. Each data point in each panel of Figure 6 denotes the compression of one input image. The color coding indicates the projected angular size of the galaxies. Figure 6 shows that in general each method has learned good representations, as evidenced by the clustering of galaxies with similar angular scales in the 2D subspace. This already points to the fact that the performance of our generative model is on par with that of the selected SSL based methods. To analyze the features extracted by the generative model, we compute their importance. Since the RGZ DR1 dataset is unlabeled and only the angular scales are given, we compute the feature importance with a random forest regressor by building a mapping between the latent codes and the angular scales. What we address here is whether the value of a component correlates with its importance in a specific setup, which is regression in our case, given the dataset. As the number of instances in the training set (∼ 100,000) is relatively large for the algorithm, we train a random forest regressor with an initial number of estimators on the latent codes in batches of 1000. The number of estimators is increased by one when training on each new batch. The results are presented in Figure 7. The solid blue line is the average value of each component over all the examples, whereas the solid red one denotes the importance (a score) of each component as output by the algorithm after training. Figure 7 shows that relatively few components carry information that is useful for inferring the angular scale of a galaxy image. In fact, using Principal Component Analysis (PCA), we find that only two and four components encode 95% and 98% of the variance respectively. Figure 7 clearly shows that a higher value of a feature component does not correlate with its importance for this regression task.
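The batched training scheme described above can be sketched with scikit-learn's `warm_start` mechanism, under which raising `n_estimators` and refitting grows additional trees on the newly supplied batch. This is an illustrative reconstruction, not the authors' exact code; the function and variable names are ours:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def incremental_rf(latents, angular_sizes, batch_size=1000, n_init=10):
    """Fit a random forest regressor on latent codes in batches, adding
    one tree (grown on the new batch) per subsequent fit."""
    rf = RandomForestRegressor(n_estimators=n_init, warm_start=True, random_state=0)
    n_trees = n_init
    for start in range(0, len(latents), batch_size):
        rf.n_estimators = n_trees
        rf.fit(latents[start:start + batch_size],
               angular_sizes[start:start + batch_size])
        n_trees += 1  # one extra estimator per new batch, as described in the text
    return rf
```

After training, `rf.feature_importances_` gives the per-component importance scores plotted as the red curve.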

Using encoded features to classify galaxies
The trained encoders are used to extract the features of the galaxy images from both MBC and FR-DEEP NVSS, two labeled datasets that have not been seen by the models during training. Leveraging the latent codes, FRI and FRII galaxies in both datasets are classified using a variety of non-neural network algorithms: k-nearest neighbors (knn), random forest (rf), support vector machine (svm), logistic regression (lr), gradient boosting (gb) and extra trees (ext). We use scikit-learn [27] to implement the classifiers, whose hyperparameters are presented in Table 1. The metrics used to assess the classification performance of each method considered in this work are
• accuracy, the fraction of correct predictions in the test set;
• roc-auc, also known as the degree of separability, i.e. the ability of a classifier to differentiate between the classes;
• recall (or sensitivity), describing how well the algorithm minimizes false negatives;
• specificity, the counterpart of recall for the negative class, describing how well the negative samples are predicted.
It is noted that when computing the metrics, FRII galaxies are taken as the positive class and FRI as the negative one. However, since the goal is to differentiate between FRI and FRII, we aim at maximizing both recall and specificity, the latter being equivalent to recall in the case where FRI is considered the positive class. For this downstream task, the representations of the training set of a dataset (e.g. MBC), obtained from a given feature extractor (e.g. VDVAE), are used to train various classifiers which are then tested on the representations of the test set of the same dataset. We adopt the same procedure for testing all feature extractors on all labeled datasets. The results are shown in Table 2.
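With FRII as the positive class (label 1) and FRI as the negative one (label 0), the four metrics can be computed with scikit-learn [27]; specificity is simply recall evaluated with the negative class treated as positive. A small sketch (the helper name is ours):

```python
from sklearn.metrics import accuracy_score, roc_auc_score, recall_score

def classification_metrics(y_true, y_pred, y_score):
    """y_pred: hard labels; y_score: classifier scores for the positive class.
    FRII = 1 (positive), FRI = 0 (negative)."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "roc_auc": roc_auc_score(y_true, y_score),
        "recall": recall_score(y_true, y_pred, pos_label=1),
        "specificity": recall_score(y_true, y_pred, pos_label=0),
    }
```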
On the MBC dataset, results suggest that overall the representations learned by VDVAE, compared to those learned by the SSL methods, carry a bit more information, such that the ext classifier generalizes better, achieving an accuracy of 82% and a roc-auc of 0.90. Moreover, both FRI and FRII are equally well classified, as evidenced by specificity and recall both being equal to 0.82. The second, third, and fourth best classifiers on the VDVAE derived representations, namely rf, gb and knn respectively, outperform all the best classifiers (performance written in bold in Table 2) resulting from training on the SSL extracted representations. This further demonstrates the better quality of the latent codes obtained from VDVAE. [14] and [28], both resorting to BYOL to learn galaxy image representations from RGZ DR1, showed that by setting a threshold cut on the angular extent of the galaxies in RGZ DR1 (essentially removing the point-source-looking images from the training set), their knn achieved a better accuracy of 85.25%, as opposed to the case where all instances in RGZ DR1 are included when training their BYOL. We find that the performance of our knn in classifying the VDVAE representations of the MBC dataset is similar to that of the knn in [14] where a threshold cut of about 16 arcsec was adopted. The ability of our ext method (82% accuracy) to classify MBC galaxies is on par with that of the knn (85.25% accuracy) in [14] where a 29 arcsec threshold was adopted.
On the FR-DEEP NVSS dataset, it appears that the top classifiers in all setups perform equally well, with a slight advantage for the lr method classifying the representations obtained from SimCLR. Interestingly, a simple logistic regression generalizes well on the SSL extracted representations overall, indicating an approximately linear mapping between the targets and the learned features.
[19] used a deep CNN architecture whose weights had been previously trained on a different galaxy dataset for classification [29], an approach known as transfer learning which can be exploited when the number of training examples of a new task is relatively small. Their deep network achieved an accuracy of 73%, a roc-auc of 0.81, a specificity of ∼ 71% and a recall of ∼ 88% on FR-DEEP NVSS data. In comparison, all our top classifiers in all setups exhibit similar performance, if not better. This demonstrates how relevant and powerful the compressed information is.

Similarity search
Another downstream task that exploits the latent codes is similarity search, which consists of finding images within a dataset that are semantically similar to a query image, using the vector representations. If θ_query is the representation of the query and θ_j that of any example from the dataset within which the search is conducted, the cosine similarity is given by

S = θ_query · θ_j / (||θ_query|| ||θ_j||).

The higher the score S, the more similar to the query an image from the dataset is. A query drawn from the MBC dataset is used to search for galaxies which are semantically similar to it in RGZ DR1. Overall, the galaxy images retrieved from the latter exhibit bright hotspots on both lobes and diffuse jets (Figure 8), which are features shared with the query shown in the left panel on the top row of Figure 8. Interestingly, all galaxies in Figure 8 appear to show roughly the same inclination. Unlike the image query presented in Figure 8, the second query has a small angular extension, albeit bigger than a point source so that some features are visible. Similar to the previous case, the query is selected from MBC and the search is conducted in RGZ DR1. Figure 9 shows that the galaxies selected based on the query (top left panel in Figure 9) are semantically similar to the latter. They all roughly show diffuse emission between two bright lobes, and again are inclined in the same direction.
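Ranking a dataset by cosine similarity to a query code is a one-liner with numpy. A minimal sketch (the function name is ours; `codes` is the matrix of latent vectors to search over):

```python
import numpy as np

def similarity_search(query, codes, k=5):
    """Return the indices and scores of the k latent codes most similar to
    the query, ranked by cosine similarity S = q . z / (||q|| ||z||)."""
    q = query / np.linalg.norm(query)
    z = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    scores = z @ q
    order = np.argsort(scores)[::-1][:k]
    return order, scores[order]
```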

Generating new images
By sampling data points from the latent space and passing them through the decoder, new images are generated. We present in Figure 10 some cherry-picked examples of images produced by our model. Overall, the model is able to capture the salient features of the RGZ DR1 data, such as the hotspots and diffuse structures. It can be noticed that the projected angular scales of the generated images are relatively small, similar overall to those of the images in Figure 9. It can be argued that this is due to the fact that the training dataset is strongly biased toward images with small angular size, as ∼ 70% of the galaxies have an extension of 35″ or less. It should be reiterated that our main objective is compressing the data rather than the ability to generate new images (e.g. for data augmentation). One possible solution to reduce the effect of this bias in the generated images is to train the generative model with a well-balanced training set which contains roughly equal numbers of images with small and large angular scales. To further improve the quality of the generated images, the model could be conditioned on the angular extensions. We defer this to future work.

Estimating log-likelihood
We have seen in Section 4 that the latent codes carry meaningful information that can be exploited for downstream tasks. The model parameters are optimized by maximizing the ELBO, which is only a lower limit of the log-likelihood. As such, a separate estimate of the log-likelihood of an input (or an entire dataset) is required within the context of identifying out-of-distribution samples. One way to address this is to directly train a density estimator on the 64 × 64 pixel images. However, given the usefulness of the latent representations, whose dimensions are much smaller than those of the images, they can instead be used to train a density estimator and thereby estimate the log-likelihood. In this section, we opt for the latter approach and train a Masked Autoregressive Flow (MAF) [30], a state-of-the-art density estimator, on the representations. We consider the denmaf library [31] in our analyses, and first give a brief overview of normalizing flows and the MAF method before presenting the results.

Masked Autoregressive Flow (MAF)
A normalizing flow [32] is a type of generative model which consists of building an invertible differentiable mapping f : u → x between a data distribution x ∼ p(x) and a base density u ∼ π_u(u) (also known as the prior), which is generally Gaussian. Using the change of variables formula, we have that [30]

p(x) = π_u(f⁻¹(x)) |det(∂f⁻¹/∂x)|.

This formulation allows the density estimation of the data after training. To generate a new data point x_new, the method samples a point u from the Gaussian prior and applies the mapping f. The density p(x) can be expressed as a product of conditionals p(x) = ∏_i p(x_i|x_{1:i−1}), parameterized as Gaussians, such that the ith conditional is given by [30]

p(x_i|x_{1:i−1}) = N(x_i | μ_i, (exp α_i)²), with x_i = u_i exp(α_i) + μ_i,

where μ_i and α_i are computed using scalar functions, μ_i = f_{μ_i}(x_{1:i−1}) and α_i = f_{α_i}(x_{1:i−1}). The scalar functions (f_{μ_i}, f_{α_i}) are constructed using a Masked Autoencoder for Distribution Estimation (MADE) [33], which consists of dense layers. The autoregressive property is fulfilled by using appropriate masking, making the ith conditional dependent only on the preceding variables. In other words, the MAF architecture is built by chaining up several MADE layers. There are several flow-based models depending on how the invertible function is constructed, such as Real NVP [34], but in our study we train a MAF on the latent codes5.
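A single affine autoregressive layer already illustrates how the change of variables yields a tractable log-density: inverting x gives u_i = (x_i − μ_i) exp(−α_i), and the Jacobian contributes −Σ_i α_i. A minimal numpy sketch (not the denmaf implementation; the conditioner functions passed in are placeholders for the MADE networks):

```python
import numpy as np

def maf_log_prob(x, mu_fn, alpha_fn):
    """Log-density under one affine autoregressive layer:
    u_i = (x_i - mu_i) * exp(-alpha_i), with mu_i, alpha_i depending on x_{1:i-1};
    log p(x) = sum_i [ log N(u_i; 0, 1) - alpha_i ]."""
    mu, alpha = mu_fn(x), alpha_fn(x)
    u = (x - mu) * np.exp(-alpha)
    log_base = -0.5 * (u**2 + np.log(2 * np.pi))  # standard normal log-density
    return np.sum(log_base - alpha, axis=-1)
```

Stacking several such layers (each a MADE block) gives the full MAF density.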

Log-likelihood of the data
We consider a MAF which comprises 48 MADE blocks, each block composed of 2 fully connected layers of 512 hidden neurons. We choose the Adam optimizer with a learning rate of 0.0005 and train the MAF model for 600 epochs on the latent codes of the RGZ DR1 data. After training, we compute the log-likelihood of the representations of RGZ DR1, MBC and FR-DEEP NVSS, and those of the new images generated by the decoder. Figure 11 shows the log-likelihood histogram of each dataset. The red, green, blue and black histograms denote the log-likelihood distributions of RGZ DR1, MBC, FR-DEEP NVSS and fake images respectively. The fact that the support of the FR-DEEP NVSS log-likelihood distribution is a subset of the RGZ DR1 base distribution indicates that FR-DEEP NVSS instances are not out-of-distribution (OOD) with respect to the RGZ DR1 dataset. In other words, the results suggest that the examples in both datasets are drawn from the same underlying distribution6. However, some examples are assigned very low log-likelihood; Figure 12 presents those with the lowest log-likelihood scores which, along with the class7 (within round brackets), are provided in the top left corner of each panel. For example, the image shown in the first panel from the left, which has the lowest log-likelihood, appears to be a bent-tail galaxy whose jets are bent. The second panel, labeled as FRII, shows two bright lobes which do not appear to be from the same central galaxy, based on the diffuse structure surrounding each of them. The third and fourth panels present a core with a one-sided jet, and a bright spot seemingly disconnected from a nearby faint object, respectively. Given that the learned representations can be utilized to retrieve similar images in a dataset, we search for images in RGZ DR1 that are semantically similar to the outlier8 corresponding to the lowest log-likelihood (top left panel in Figure 12). Search results are shown in Figure 13. On the one hand, it is clear that none of the retrieved images are semantically similar to the query, demonstrating the efficiency of the density estimator at assigning low log-likelihood to images with features that have not been seen during its training9. On the other hand, interestingly, it can be noticed that, like the query, each galaxy image in Figure 13 is located in the bottom right corner of the panel. This shows that, although the patterns are not similar, the feature components of the latent codes are such that the group of pixels carrying most of the information is roughly located at the same corner in each panel of Figure 13. This test further demonstrates the meaningfulness of the learned representations. Lastly, Figure 11 also implies that the decoder is able to mimic the RGZ DR1 data, as evidenced by the log-likelihood of each generated image lying well within the range of RGZ DR1 log-likelihood values.
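Once per-example log-likelihoods are available, a simple anomaly criterion is to flag examples falling below a low quantile of the training distribution. This is a hedged sketch of one possible criterion; the quantile threshold is our illustrative choice, not one prescribed in the text:

```python
import numpy as np

def flag_outliers(train_loglik, test_loglik, quantile=0.01):
    """Flag test examples whose log-likelihood falls below a low quantile
    of the training log-likelihood distribution."""
    threshold = np.quantile(train_loglik, quantile)
    return test_loglik < threshold
```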

Conclusion
We have shown in this work that it is possible to learn meaningful latent codes of radio galaxy images that can be leveraged for various downstream tasks. We have trained on an unlabeled dataset a variant of the Variational AutoEncoder (VAE), whose approximate posterior and prior are both more expressive (compared to a vanilla VAE) thanks to a hierarchical structure composed of many stochastic layers of latent variables. We have assessed the overall performance of our VAE model by looking at its ability to reconstruct the inputs, and by analyzing how meaningful the representations it has learned during training are. In our investigation, we have also trained various SSL based methods, SimCLR, BYOL and SimSiam, and compared their performance, in terms of classifying galaxies from labeled datasets, with that of our model. The features extracted by each model are visualized in a two dimensional subspace using t-SNE, a dimensionality reduction method. To investigate whether the learned representations from different models carry meaningful information, six different classifiers, k-nearest neighbors (knn), random forest (rf), support vector machine (svm), logistic regression (lr), gradient boosting (gb), and extra trees (ext), are trained on them in order to identify FRI/FRII galaxies from two different datasets. Similarity search, which is another downstream task employing the compressed data, has also been conducted. Although the capacity of the VDVAE model to generate new samples is not our primary objective in this work, we have checked how well it emulates the training data. Furthermore, we have estimated the log-likelihood of data by training a Masked Autoregressive Flow (MAF), a state-of-the-art density estimator, on the latent codes. This is especially useful in the context of finding anomalies/novelties in a dataset. We summarize our findings as follows: • Results suggest that our model is able to recover the inputs, capturing features like jets and diffuse structure, which indicates that the reconstructed images do not seem to suffer from blurriness, a known issue with VAE models in general.
• The galaxy representations obtained from each model are well clustered with respect to angular size, implying that each method has properly learned to encode the high dimensional data.
• In this setup, the representations of galaxies from a labeled dataset, either MBC or FR-DEEP NVSS, are retrieved by a feature extractor (VDVAE, SimCLR, BYOL, or SimSiam) and used to train several non-neural-network classifiers. In general, for the MBC dataset, the information carried by the features extracted by the generative model is of slightly better quality than that from the SSL-based models. The four best classifiers trained on the VDVAE latent codes, all achieving accuracy ≥ 76%, roc-auc ≥ 0.86, specificity ≥ 0.73, and recall ≥ 0.78, outperform the best classifiers of all other setups in this work. The results on classifying galaxies in the MBC dataset using learned representations also show that the performance of our generative model is comparable to that of the model in [14]. On the FR-DEEP NVSS dataset, the top classifiers in all setups perform equally well. Interestingly, the performance of simple classifiers in our analyses is on par with, if not better than, that of a CNN-based model used in [19]. This shows how meaningful the learned representations are.
• The learned representations can be used for similarity search, as evidenced by the retrieved images being semantically similar to the query image. We carry out searches for galaxies with large and small angular sizes. The results in both cases are consistent, in the sense that all the images found exhibit similar patterns. In addition, the inclination of the galaxy in the query image is roughly matched by all galaxies returned by the search. The importance of this application was highlighted in [11], where encoded features were leveraged to search for similar images in a large dataset.
• We find that the decoder is capable of generating new images that are overall comparable with the training data. Nevertheless, the generated images tend to have smaller angular sizes, which can be attributed to a bias in the dataset. The possibility that the decoder still lacks power, and hence requires more fine-tuning, cannot be ruled out. However, a sufficiently powerful decoder is prone to posterior collapse [35, 36], where the latent codes are no longer useful as they have not been learned by the model. Given that the main objective of this study is to learn the latent codes, increasing the power of the decoder in order to optimize the model's ability to generate new images needs to be approached carefully.
• The galaxies in FR-DEEP NVSS appear to have been drawn from the same distribution as those in the RGZ DR1 dataset, as evidenced by the log-likelihood values of the former lying within the range of those of the latter. However, some galaxies within the MBC dataset are associated with log-likelihood scores outside the RGZ DR1 base distribution, and are therefore considered OOD (based solely on the likelihood as a metric). To further validate both the usefulness of the latent codes and the density estimation by the MAF model, we search for images in RGZ DR1 that are semantically similar to the OOD instance (from MBC) with the lowest log-likelihood. We find that the search fails to return similar galaxies, corroborating that the query image is indeed OOD with respect to the RGZ DR1 dataset. It is also found that the new images generated by the decoder are in-distribution with respect to RGZ DR1, as the estimated log-likelihood of each new instance is well within the log-likelihood distribution of RGZ DR1. It is worth noting that although the VDVAE encoder and decoder are trained simultaneously on the RGZ DR1 data, the new images, obtained by sampling from the latent space and reconstructing via the decoder, have never been seen by the encoder which extracts the latent codes. This shows that the decoder is able to emulate the RGZ DR1 data.
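The OOD criterion described above reduces to a simple thresholding rule on MAF log-likelihood scores. The sketch below illustrates that rule only; the percentile band, array names, and synthetic scores are all illustrative assumptions, not the exact procedure or values used in this work.

```python
import numpy as np

def ood_flags(base_loglik, query_loglik, lower_pct=0.5, upper_pct=99.5):
    """Flag query samples whose log-likelihood falls outside a
    percentile band of the base (in-distribution) scores."""
    lo = np.percentile(base_loglik, lower_pct)
    hi = np.percentile(base_loglik, upper_pct)
    return (query_loglik < lo) | (query_loglik > hi)

# Synthetic stand-ins for MAF log-likelihoods on the base dataset
# (playing the role of RGZ DR1) and on query galaxies.
rng = np.random.default_rng(0)
base = rng.normal(loc=-50.0, scale=5.0, size=10_000)
query = np.array([-52.0, -48.0, -120.0])  # last score is far below the base range

flags = ood_flags(base, query)
print(flags)  # only the extreme score is flagged as OOD
```

In practice, `base_loglik` and `query_loglik` would come from evaluating the trained MAF on the latent codes of each dataset; the band width controls how aggressively candidates are flagged for inspection.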
The generative model in this work has shown promising performance. For future investigation, one question that could be addressed is the impact of the input dimensions on the results, for instance by training on images at 128 × 128 pixel resolution. This is one way to assess the robustness of the method.
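The similarity search used throughout this work amounts to a nearest-neighbour lookup in latent space. The following minimal sketch assumes latent codes have already been extracted by the encoder (which is omitted) and uses cosine similarity for the ranking; the function and array names are illustrative, not the paper's implementation.

```python
import numpy as np

def most_similar(query_code, codes, k=5):
    """Return indices of the k latent codes most similar to the query,
    ranked by cosine similarity."""
    q = query_code / np.linalg.norm(query_code)
    c = codes / np.linalg.norm(codes, axis=1, keepdims=True)
    sims = c @ q                  # cosine similarity to every stored code
    return np.argsort(-sims)[:k]  # indices of the top-k matches

# Toy latent codes: 100 random vectors; the query is a slightly
# perturbed copy of entry 42, so that entry should rank first.
rng = np.random.default_rng(1)
codes = rng.normal(size=(100, 16))
query = codes[42] + 0.01 * rng.normal(size=16)

top = most_similar(query, codes, k=3)
print(top)  # entry 42 should appear first
```

Because the query's latent code can come from a different dataset than the stored codes (e.g. an MBC query searched against RGZ DR1), this lookup works across datasets as long as both are encoded by the same model.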

Figure 1: Schematic diagram of the VDVAE model. The red and blue blocks denote the residual blocks of the encoder and the top-down blocks of the decoder, respectively. The black arrows indicate mixing via concatenation along the channel dimension.

Figure 3: Schematic diagram of the BYOL method. The online network components are shown in blue, whereas those of the target network are in red.

Figure 4: Schematic diagram of the SimSiam method. In the first (blue) branch, the prediction p1 is matched with the representation z2 using negative cosine similarity, indicated by NCS_k (k = 1, 2). The dashed red arrow indicates that z2 acts as a constant with zero gradient. Similarly, in the second (red) branch, p2 is matched with z1, which is turned into a constant, as indicated by the dashed blue arrow.

Figure 5: Reconstruction of some example images from the RGZ DR1 test set. The top row shows the images from the test set and the bottom row shows the corresponding images recovered by the decoder (i.e. the decoder output for each input image).

Figure 6: Latent space representations of galaxies from the training set learned by the different methods: VDVAE, SimCLR, BYOL, and SimSiam. For visualisation, the t-SNE method is used. The color coding indicates the angular scale of the galaxy in arcseconds.

Figure 8: Similarity search exploiting the learned representations of galaxies. The top left image is the query, from the test set in MBC, and all the remaining images are obtained by searching in RGZ DR1.

Figure 9: Similarity search where, as opposed to Figure 8, the query image has a relatively small extension. The top left image is the query, from the test set in MBC, and all the remaining images are obtained by searching in RGZ DR1.

Figure 10: Examples of images that are generated by the trained decoder from sampling in the latent space.

Figure 11: Histograms of the log-likelihood of all samples in each dataset: RGZ DR1 (red), MBC (green), and FR-DEEP NVSS (blue). The log-likelihood distribution of new images generated by the VDVAE is shown as a solid black line.

Table 1: For each method, the presented hyperparameters are the ones that differ from their default values in scikit-learn.