The Artificial Microscope.

Optical microscopy has demonstrated a remarkable ability to produce large data sets from biological samples by light interrogation, tunable in terms of spatial and temporal resolution down to the nano- and pico-scale, respectively. Such a data set is the core for developing an artificial microscope that aims to transform a label-free interrogation of the sample into a molecular-rich, fluorescence-based image. The intelligent artificial microscope is AI-guided through a computational core of three modules, combining a convolutional neural network (CNN) and tensor independent component analysis (tICA), an unsupervised machine-learning method, within a supervised deep-learning strategy, with the ambitious target of creating a robust virtual environment to see "what we could not perceive before". An interesting case study concerns the visualisation of chromatin organisation.


A Microscope in the machine.
Modern optical microscopes, from super-resolved fluorescence to label-free mechanisms of contrast, are powerful instruments able to produce images that are rich sources of molecular information, offering unprecedented insight into the morphological and functional properties of biological cells at the nanoscale. Super-resolved fluorescence microscopy [1], incorporating photochemical parameters from brightness to lifetime, and non-linear approaches, like those associated with multi-photon excitation able to exploit intrinsic fluorescence and SHG/THG, are coupled to label-free polarisation methods like Mueller matrix microscopy [2], expanding the available data set. Today we are in the realm of multimodal optical microscopy boosted by artificial intelligence, which makes the microscope intelligent [3]. We aim to develop a microscope in the machine able to transform label-free imaging into molecular-rich images without the need to label the biological cells.

Data generator
The first module of the AI architecture for the intelligent microscope has a dual purpose: predicting a fluorescence image from a label-free image, learning the specificity conferred by fluorescent proteins, and improving the contrast of the label-free image. It also facilitates the acquisition of a multimodal dataset, since multiple fluorescence images can be predicted and constructed from a single label-free image without the need to label the sample or to deal with photobleaching and photodamage issues. The architecture is represented in Figure 1. It is a convolutional neural network whose input is a label-free image and whose output is a single fluorescence image. The first part of the network is an encoder that compresses the information and learns the features; the second part is a decoder that reconstructs an image starting from the encoder representation. The case study is related to chromatin compaction in the cell nucleus as a function of potential cancer progression [4].
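The encoder-decoder idea can be sketched in a few lines of PyTorch. This is a minimal illustration under assumed toy sizes (one input channel, two downsampling and two upsampling stages, 64x64 images), not the network actually used in the paper: the encoder compresses the label-free image into a feature representation, and the decoder reconstructs a single predicted fluorescence image of the same size.

```python
import torch
import torch.nn as nn

class LabelFreeToFluorescence(nn.Module):
    """Toy encoder-decoder CNN: label-free image in, fluorescence image out.
    Layer counts and channel widths are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        # Encoder: compresses the input and learns features (each conv halves H and W)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstructs an image from the encoder representation
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = LabelFreeToFluorescence()
x = torch.randn(1, 1, 64, 64)   # one label-free image (batch, channel, H, W)
y = net(x)                      # predicted fluorescence image, same spatial size
```

In practice such a network would be trained with a pixel-wise loss (e.g. mean squared error) between the predicted image and the real fluorescence image acquired for training.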

tICA
In general, the multimodal microscope produces a large amount of data, coming from the different light-sample interactions, that needs to be structured. The second module therefore aims to merge data coming from different mechanisms of contrast [1,2]. The aim is to realise an approach able to automatically find patterns of related changes and discover common features across multiple modalities. From these features we would like to infer the same spatial patterns across modalities; in this way it will be possible to fuse information across several contrast mechanisms. Mathematically, the problem of representation is to find a projection of the data distribution that leads to some sort of 'intrinsic' coordinate system in which the data structure is most apparent. The idea is to use Independent Component Analysis (ICA), a model for finding meaningful, spatially independent components in an unsupervised setting, in a tensor formulation [5], in order to infer the same spatial patterns across modalities and therefore fuse information. The multimodal data are modelled as a sum of components (Fig. 3), each of which can be expressed as the tensor product of one spatial map (n = 1, …, N), one subject-course (r = 1, …, R) and one modality-course (t = 1, …, T):

x_{n,t,r} = Σ_{i=1}^{I} a_{n,i} b_{t,i} c_{r,i} + ε_{n,t,r}

where a_{n,i} is the spatial map over the N voxels for component i (each component i has a single spatial map for all modalities); b_{t,i} is the modality weighting for component i in modality t, which tells which modality t uses to look into a specific component (and so into a spatial map); c_{r,i} is the weight for component i in subject r, which forms a link between the different modalities and is the appropriate place to look in order to find out which modalities are driving a particular component; and ε_{n,t,r} is the noise, i.e., everything that cannot be explained by the previous decomposition.
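The generative model behind this decomposition can be made concrete with NumPy. The sketch below, with toy dimensions chosen for illustration, builds a multimodal data tensor as a sum of rank-1 components (spatial map x modality weighting x subject weight) plus noise; it demonstrates the model itself, not the tICA inference step that would recover the factors from real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed): N voxels, T modalities, R subjects, I components
N, T, R, I = 100, 4, 6, 3

A = rng.normal(size=(N, I))  # a_{n,i}: spatial maps, one per component
B = rng.normal(size=(T, I))  # b_{t,i}: modality weightings per component
C = rng.normal(size=(R, I))  # c_{r,i}: subject weights per component

# x_{n,t,r} = sum_i a_{n,i} * b_{t,i} * c_{r,i}  (sum of rank-1 tensor products)
X = np.einsum('ni,ti,ri->ntr', A, B, C)

# epsilon_{n,t,r}: everything the decomposition cannot explain
X_noisy = X + 0.01 * rng.normal(size=X.shape)
```

Fitting this model to measured data (i.e. estimating A, B and C under an independence constraint on the spatial maps) is what the tICA algorithm of [5] does.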

Fate prediction.
The last module leverages the parameters obtained from tICA to predict, in a supervised manner, whether a cell imaged through different contrast methods is healthy or diseased. An advantage of using tICA is that it is easy to understand which modality is most responsible for the prediction for each sample.
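A minimal sketch of this supervised step, using scikit-learn with synthetic stand-ins for the tICA subject weights (the real features would come from the decomposition of measured data), could look like this. The class labels, feature dimensions and group separation below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic tICA subject weights: 30 healthy and 30 diseased cells,
# each described by I = 3 component weights (toy, assumed values)
healthy  = rng.normal(0.0, 1.0, size=(30, 3))
diseased = rng.normal(2.0, 1.0, size=(30, 3))
X = np.vstack([healthy, diseased])
y = np.array([0] * 30 + [1] * 30)   # 0 = healthy, 1 = diseased

# Linear classifier on the component weights
clf = LogisticRegression().fit(X, y)

# The fitted coefficients indicate which tICA component (and, through the
# modality weightings, which contrast mechanism) drives the prediction
component_importance = np.abs(clf.coef_[0])
```

A linear model is a natural first choice here precisely because its coefficients remain interpretable in terms of the tICA components.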

Conclusion
The artificial microscope aims to produce a new way of forming images in a "liquid" and tunable way [6], towards a better understanding of structure-function relationships in biological cells. The ambitious target is to remove the need for contrast agents when examining living cells, with the prospect of performing real-time biopsies using label-free data. The growth of interest and the increasing number of AI approaches [7] are encouraging, and suggest that this represents an important and challenging step for computational imaging at the molecular level.

Fig. 1. Neural network architecture. The training images, acquired with a confocal microscope (Nikon A1r MP, Nikon), are HeLa cell nuclei stained with Hoechst 33342 for fluorescence.