A Technical Introduction to Transmission Electron Microscopy for Soft-Matter: Imaging, Possibilities, Choices, and Technical Developments

From materials science to biology, the transmission electron microscope is one of the most widely applied structural analysis tools to date. It has the power to visualize almost everything from the micrometer to the angstrom scale. Technical developments keep opening doors to new fields of research by improving aspects such as sample preservation, detector performance, computational power, and workflow automation. For more than half a century, and continuing into the future, electron microscopy has been, and is, a cornerstone methodology in science. Herein, the technical considerations of imaging with electrons in terms of optics, technology, samples and processing, and targeted soft materials are summarized. Furthermore, recent advances and their potential for application to soft-matter chemistry are highlighted.

Acknowledgements
Figure 6b,d,e,i as well as Figure 7 were made at the Center for Structural Systems Biology in the Multi-User CryoEM Facility, established with substantial support from the DFG (grant numbers INST 152/772-1 and 777-1 FUGG). This work was in part made possible by an EMBO long-term fellowship (ALTF 356–2018) awarded to L.E.F. The authors declare no conflict of interest.


Introduction
Since its invention by Ernst Ruska in 1931, [1] transmission electron microscopy (TEM) has greatly influenced the course of modern-day science. While initially the high vacuum and radiation damage were thought to strongly limit its usability, the development of sample preparation techniques led to TEM playing a significant role in materials science, physics, chemistry and biology. Ernst Ruska was awarded the 1986 Nobel Prize in Physics "for his fundamental work in electron optics, and for the design of the first electron microscope." It is one of the most influential scientific inventions. In soft-matter chemistry, however, the use of cryo-EM is still relatively scarce. Between 2010 and 2015, only 29% of the soft-matter papers in which electron microscopy was used deployed cryo-EM. [33] Soft-matter chemistry could take much more advantage of the many technical developments in the field of structural biology than it does currently.
In the field of soft-matter chemistry, many different tools and techniques are used to measure, quantify and analyze the different properties of a chemical product and its behavior. TEM is one of them. Although most organic compounds are too small to be studied by TEM, the electron microscope is able to visualize their supramolecular organization. [34] Finding its origin in viral studies, the term self-assembly was first mentioned in relation to the organization of proteins and lipids into larger complexes [35,36] or crystals. [37] It describes the phenomenon that molecules can organize into larger structures as a consequence of their design. [38] The principles of self-assembly in biology have been adapted to advance soft-matter chemistry, [39] giving rise to, for example, the field of supramolecular chemistry, where multiple molecular species can organize themselves into structures with defined microscopic properties and macroscopic characteristics (films, layers, gels, membranes, vesicles, micelles, tubes, surfaces, solids, etc.). [40,41] These products of self-assembly are developed into new functionalized materials such as responsive materials, [42][43][44][45] self-healing materials, [46][47][48][49] and loaded nanocarriers. [50][51][52][53] In particular for the study of these higher-order structures, the input of (cryo-)TEM has been indispensable, giving direct and invaluable structural evidence and insight into self-assembly [54] and the rearrangement of the self-assembly products as a consequence of molecular responses to external cues, such as changes in pH, [55,56] temperature [57,58] or polarity. [59,60] Not everyone needs to be an expert in the TEM field to make proper use of this technique.
For those who are more interested in TEM as a component of their work, as well as for scientists new to the EM community, this review aims to provide a detailed understanding of what electron microscopy offers, what its capabilities are, how imaging conditions and sample preparation influence the questions that can be answered, and what determines the achievable resolution. TEM imaging is becoming increasingly popular for the study of soft materials, but the field will only increase in impact when it makes use of the latest technical developments in TEM. While its power to study complex systems in a near-native state is unmatched by any other technique, cryo-EM has unfortunately not yet become the routine method of choice in comparison to staining or even drying of specimens. [33] Here, we discuss the microscope, image formation, interpretation and image processing, and highlight methods available from the fields of structural and cellular biology, such as single particle averaging, phase-plate technology and cryo-FIB milling. This review provides insight into the technical background of cryo-EM, the choices and decisions that are made, the new possibilities, and their potential contribution to soft-matter studies in the future.

Electrons
According to quantum mechanics, electrons, like light, can be described by the concept of wave-particle duality. While both can have variable speeds and wavelengths, the difference is that the wavelength of an electron is a function of its speed. It decreases with increasing acceleration voltage. [61] With decreasing wavelength, the resolving power increases. [62] In contrast to X-rays, electrons interact strongly and specifically with matter by scattering and, unlike neutrons, their negative charge allows electrons to be focused by electromagnetic lenses. [63] The latter is valuable, because it allows the study of matter by imaging, rather than only by diffraction, where imaging includes the structural phase information in the experimental data and diffraction does not.
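The dependence of wavelength on acceleration voltage can be made concrete with a short calculation. The sketch below uses the standard relativistic de Broglie formula; the voltages and constants are illustrative and not taken from the text:

```python
import math

# CODATA physical constants (SI units)
H = 6.62607015e-34          # Planck constant, J*s
M0 = 9.1093837015e-31       # electron rest mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C
C_LIGHT = 2.99792458e8      # speed of light, m/s

def electron_wavelength(voltage_v):
    """Relativistic de Broglie wavelength (meters) of an electron
    accelerated through `voltage_v` volts:
    lambda = h / sqrt(2*m0*eV * (1 + eV / (2*m0*c^2)))."""
    ev = E_CHARGE * voltage_v  # kinetic energy in joules
    return H / math.sqrt(2 * M0 * ev * (1 + ev / (2 * M0 * C_LIGHT**2)))

for kv in (80, 200, 300):
    lam_pm = electron_wavelength(kv * 1e3) * 1e12
    print(f"{kv:3d} kV -> {lam_pm:.2f} pm")
```

The printed values show the wavelength shrinking as the acceleration voltage rises, which is exactly why higher voltages carry more resolving power.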
Electrons are optimal for studying soft materials at high magnifications, except for two aspects that may constrain application: i) samples must be analyzed in vacuum and ii) possible radiation damage. Both issues demand measures to limit their effect and are recurring themes in microscope design, sample preparation, imaging and image processing. Putting a specimen in vacuum without protection implies that the sample will be dehydrated. Water plays a key role in many processes, including chemical reactions, structural support through hydrophobic interactions and hydrogen bonding, molecule stability and conformation, [64] and molecular self-assembly. [33] Removing water critically influences these structures and many of these processes. Without precautions, cells, membranes, aggregates and artificial systems will all be deformed and rearranged, and molecular concentrations will be altered.
How much energy a sample in the TEM has to endure depends on the desired resolution. At a higher magnification, the illuminating beam will typically be focused on a smaller area.

Linda E. Franken did her Ph.D. in the group of Egbert Boekema in Groningen, where she worked on both biological and soft-matter samples and developed a strong background in electron microscopy and single particle analysis. In 2017, she joined Kay Grünewald's group "Structural Cell Biology of Viruses," where she is studying infection processes inside cells making use of a combination of SPA, FIB-milling, and tomography.
The electron exposure at the specimen level is described by the dose rate (e⁻ Å⁻² s⁻¹). The electron energy depends on the acceleration potential of the system. [65,66] The absorbed energy dose is only a fraction of the total dose and scales inversely with the square of the electron velocity. As a consequence, higher operation voltages suffer less from energy transfer to the specimen. However, at higher operation voltages, the number of interactions between beam and sample is also reduced, leading to less signal (contrast) in the images. Exposure to beam radiation in the typical range of operating voltages in the field of soft-matter TEM (80-300 keV) may cause severe damage to the sample, ranging from re-conformation and de-crystallization to breaking of atomic bonds, removal of side-chains and, in general, loss of mass. [65] In order to limit radiation damage, the dose a specimen is exposed to has to be minimized. On the other hand, because the detection of electrons is a stochastic process, a low electron dose (exposure) will inherently lead to very noisy images. [67]
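The stochastic character of electron detection can be illustrated with a minimal shot-noise simulation: the counts per pixel follow Poisson statistics, so the noise grows as the square root of the dose while a weak feature grows linearly with it. The contrast value and pixel count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_at_dose(dose_e_per_px, contrast=0.1, n_px=100_000):
    """Simulate shot-noise-limited detection: a feature that modulates
    the mean count by `contrast` must rise above the ~sqrt(dose)
    Poisson noise of the background."""
    background = rng.poisson(dose_e_per_px, n_px)
    signal = dose_e_per_px * contrast  # feature amplitude in counts
    noise = background.std()           # approximately sqrt(dose)
    return signal / noise

for dose in (1, 10, 100):
    print(f"{dose:4d} e-/px -> SNR ~ {snr_at_dose(dose):.2f}")
```

Increasing the dose a hundredfold improves the signal-to-noise ratio only tenfold, which is why low-dose cryo-EM images are inherently noisy.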

Scattering
A TEM image is formed by propagating a bundle of waves (beam) onto a sample. Some of the waves interact with the sample, after which the resulting image is magnified by a series of lenses. There are two types of interactions between electrons from the beam (primary electrons) and atoms from the sample: elastic and inelastic scattering. [66] In an event of elastic scattering, the combined kinetic energy (E = ½mv²) and the combined momentum (p = mv) of the atom and the interacting electron are the same before and after their interaction. The large difference in mass between an electron and a nucleus means in practice that electrons hardly transfer kinetic energy or momentum to the nucleus, and these scattering events are therefore essentially elastic: the (high-resolution) information from the sample is present in the elastically scattered electron. Inelastic scattering occurs when kinetic energy or momentum is not conserved. This is the case for electron-electron interactions. Energy is transferred to the sample and causes radiation damage. Inelastic scattering processes are less localized than elastic scattering and cannot contribute high-resolution information. Furthermore, the reduced energy of inelastically scattered primary electrons means that they do not interact with the lenses in the same way as electrons that preserved their energy, leading to differences in focal length.
The number of protons in an atom determines how many primary electrons scatter, and under which angles. The closer a primary electron passes the nucleus and the heavier the nucleus, the more the path of the electron will be bent by the (screened) electric potential of the nucleus, and the more electrons will interact with this atom. Scattering from an atom thus depends on the atomic number (Z) of that atom: inelastic scattering increases with Z^(1/3), while elastic scattering increases with Z^(4/3). To increase the total number of scattering events and change the ratio between elastic and inelastic scattering, heavy metals can be added to samples that consist mainly of carbon, such as most soft materials. This is typically referred to as staining.
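Taking these scalings at face value, the elastic-to-inelastic ratio grows roughly linearly with Z, which a few lines of Python make concrete. The absolute prefactors are omitted, so only the ratios are meaningful, and the element choices are merely illustrative of common stains:

```python
# Using the scalings quoted above (elastic ~ Z^(4/3), inelastic ~ Z^(1/3)),
# the elastic:inelastic ratio reduces to Z itself. This is why heavy-metal
# stains (e.g., uranium or osmium compounds) boost usable contrast relative
# to the carbon that dominates most soft materials.
def elastic_over_inelastic(z):
    """Ratio of elastic to inelastic scattering cross-sections,
    prefactors dropped: Z^(4/3) / Z^(1/3) = Z."""
    return z ** (4 / 3) / z ** (1 / 3)

for name, z in (("carbon", 6), ("osmium", 76), ("uranium", 92)):
    print(f"{name:8s} Z={z:2d}  elastic/inelastic ~ {elastic_over_inelastic(z):.0f}")
```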

The Electron Microscope
The transmission electron microscope was invented by Ernst Ruska and Max Knoll in 1931. Although modernized, the general design of the electron microscope has remained essentially unchanged. [68] Image quality is defined by its resolution, contrast and signal-to-noise ratio (S/N). The resolution of electron microscopes improved steadily from around 100 nm in the very early models to 0.1 nm (1 Å) and even better nowadays. [69] Phase contrast imaging resolving single atoms or atomic clusters is sometimes referred to as high-resolution transmission electron microscopy (HR-TEM). To achieve this, a highly coherent source, a well-aligned system, a good detector and a suitable sample are essential. Using an HR-TEM does not necessarily imply high-resolution images; some samples simply do not permit high resolution. Examples are extremely radiation-sensitive materials (e.g., sugar molecules) or samples that aggregate or self-assemble into larger amorphous assemblies [70] or bowl-shaped particles. [59] Also, an increased sample thickness increases the number of multiple scattering events at the cost of resolution. [71] The term HR-TEM should therefore be used with care.
The electrons travel roughly 1 to 2 meters through the high vacuum of the column from top to bottom. The main components are: the electron source, condenser lenses, specimen holder, objective lens, projector lenses and screen/camera (Figure 1). Each of these components can be of varying design and hold various attributes, which all contribute to the final image quality.

Electromagnetic Lenses
The column is a stack of electromagnetic lenses along the optical axis of the electron microscope ( Figure 1). From the top, underneath the electron source (cf. Section 3.2) there are two or more condenser lenses, which control the spot size (C1) and the intensity (C2). They make a demagnified image of the crossover of the beam and function to control the beam shape. Below the condenser lenses sits the objective lens. Most TEMs have an objective twin-lens comprised of two electromagnetic fields, one above the sample, giving extra control over the beam, and one below the sample, magnifying the image roughly 50 times. Following the objective lens, the beam goes through the diffraction lens. In diffraction mode, this lens is weakened, such that it passes the image from the back focal plane of the objective lens, resulting in an image of the diffraction pattern of the specimen. Below this lens, a series of magnifying lenses functions to magnify the intermediate image.
Electromagnetic lenses suffer from three types of aberrations (Figure 2): spherical aberration (Cs), chromatic aberration (Cc), and astigmatism. These can significantly reduce the image quality and are especially important in the condenser lenses, which determine the beam quality, and in the objective lens, of which all aberrations get magnified several thousand times. Spherical aberration is the result of the fact that an electron that passes closer to the center of an electromagnetic field is exposed to a weaker force compared to one passing the lens closer to the coils (Figure 2b). To overcome this problem, electron microscopes work with very small opening angles, which negatively affects the maximum resolution that can be reached. [62] To avoid an uneven rotational contribution of the Cs of the objective lens (coma), TEM images are taken with illumination parallel to the optical axis (coma-free alignment). [72,73] A Cs corrector, a multipole lens that creates a negative spherical aberration, can nullify the Cs effects. [74] Alternatively, the Cs effect is reduced by taking a smaller C2 aperture (Figure 2c). Chromatic aberration (Figure 2d) arises when not all electrons passing the lens have the same speed. Faster electrons are less strongly affected by the current of the lens compared to slower electrons. A monochromator corrects the ΔE (<0.2 eV) of the beam (see also Section 3.2.1). An energy filter, in-column or post-column, can remove electrons that have a different wavelength due to inelastic scattering (ΔE = 1-100 eV). An energy filter is an electromagnetic field acting as a prism, sorting electrons by their energy, followed by a pinhole selecting either zero-loss electrons (electrons that did not transfer energy to the sample) or specific energy losses. Both in-column (omega) and post-column energy filter implementations exist.
While energy filters are not essential to obtaining high-resolution information, removing inelastically scattered electrons improves the S/N ratio, especially in thicker specimens where the number of interactions is larger, which is a benefit for techniques such as tomography (see Section 6).
Astigmatism is the result of a difference in the strength of the lens in the x and y directions. In the condenser lenses this results in an oval-shaped beam, and in the objective lens it results in a different focusing strength in the x and y directions of the image. All microscopes offer the option to correct astigmatism prior to imaging. Small remaining levels of astigmatism in an image can be corrected computationally, but more severe levels can also lead to variations in magnification, especially if the illumination is not parallel. These effects are much harder to correct for.

Quality Parameters
The performance of the source is described by three factors: stability, brightness and coherence. Stability is the variation in the electron current (emission) from the source. Brightness is the current density per unit solid angle (A cm⁻² sr⁻¹). Coherence is comprised of spatial and temporal coherence. A perfectly coherent beam is composed of electrons that have the same wavelength and phase and originate from one point. Spatial coherence depends on how much the source approaches a point source (Figure 3). When electrons originate from a larger area, they are more weakly related and interfere within the beam, leading to a faster decay of high-resolution information in the image. This per-frequency (q) decay is described by the spatial coherence envelope function

E_s(q) = exp[−(π αi/λ)² (Cs λ³ q³ + Δz λ q)²] (1)

where Cs is the spherical aberration of the objective lens, λ is the wavelength of the electrons, Δz is the applied defocus, and αi is the half-opening of the illumination angle (which is determined by the perceived size of the source). Although the coherence of a source is a given, its effect and thus the envelope function can be improved in several ways: by correcting the spherical aberration (Cs corrector), decreasing the wavelength (higher acceleration voltage) and limiting the applied defocus. The latter two come at the cost of contrast. Most importantly, the illumination angle αi (practical source size) can be reduced by lifting the cross-over of the C2 lens (stronger current), but only when there is no demand for parallel illumination, in which case the crossover needs to be placed exactly in the front-focal plane of the objective lens. The illumination angle is also reduced by taking a spot size as small as possible (larger lens current in condenser lens 1) (Figure 3b). This reduces the intensity of the beam, which should not be compensated by decreasing the C2 current, as this would nullify the gain in coherence. Ultimately, this means that the minimal size of the illumination angle, and thereby the spatial coherence, is determined by the minimal dose rate the detector needs for recording. When imaging with a condensed beam, a smaller C2 aperture can improve the coherence, but at parallel or divergent illumination, its effect is limited to somewhat selecting electrons coming from the core of the source, and therefore the role of the C2 aperture is mostly to control the beam diameter. Spatial coherence determines the quality of phase-contrast images, the sharpness of electron-diffraction patterns, and thus the quality of diffraction contrast images from crystalline materials. Temporal coherence (coherence length) describes how similar (monochromatic) the wavelengths of the electrons are and is dependent on two factors: the stability of the power supplies and the spread in electron velocities in relation to the total voltage (relative energy spread, ΔE/E (eV)) of the beam. The temporal coherence envelope function is described by

E_t(q) = exp[−(π λ Δ q²/2)²], with defocus spread Δ = Cc (ΔE/E) (2)

where λ is again the electron wavelength, Cc is the chromatic aberration of the lenses, and ΔE/E is the relative energy spread. The temporal coherence of the beam is improved by going to higher acceleration voltages (smaller wavelength and smaller relative energy spread). Furthermore, a monochromator can reduce the energy spread (ΔE) to enable reaching sub-angstrom resolution. [75] This is valuable in the field of materials science in particular, where different TEM applications are used. [76,77]

Small 2020, 1906198

Figure 2. Spherical and chromatic aberration in electromagnetic lenses. a) A perfect lens has a focal point (F). b) Spherical aberration originates from the fact that electrons that go through the lens experience a stronger electromagnetic field when passing closer to the coil compared to electrons that pass through the center of the lens, resulting in a focal area of least confusion, rather than a point. c) A C2 aperture can reduce the size of this area. d) Chromatic aberration results from variations in electron speed. Slower electrons (blue) will be focused more compared to faster electrons (red) and the electromagnetic lens will have a focus range rather than a single focal point.

Figure 3. Spatial and temporal coherence. a) An incoherent source emits light or electrons that have differences in wavelength and do not originate from one point. Spatial coherence describes how much the source approaches a point source and temporal coherence depends on the difference in wavelength. b) Spatial coherence in TEM is dependent on the size of the source, e.g., thermionic gun versus field emission gun. The latter more closely approaches a perfect point source. With a perfect point source, the illumination angle is infinitely small (b1). When this is not the case, it could be optimized by increasing the distance between sample and cross-over by increasing the strength of the C2 lens, but for parallel illumination in a two condenser lens system this current is fixed. Furthermore, spatial coherence is optimized by going to a spot size as small as possible, but at the cost of intensity (b2 in comparison to b3).
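The two coherence envelopes can be evaluated numerically to see how they damp high spatial frequencies. The expressions below are standard textbook forms of the spatial and temporal envelope functions, and all parameter values (Cs, Cc, defocus, illumination angle, energy spread) are illustrative assumptions rather than values from the text:

```python
import numpy as np

def spatial_envelope(q, cs, lam, dz, alpha_i):
    """Spatial-coherence envelope:
    E_s(q) = exp(-(pi*alpha_i/lam)^2 * (Cs*lam^3*q^3 + dz*lam*q)^2),
    all quantities in SI units, q in 1/m."""
    grad_chi = cs * lam**3 * q**3 + dz * lam * q  # gradient of the phase shift
    return np.exp(-((np.pi * alpha_i / lam) ** 2) * grad_chi**2)

def temporal_envelope(q, cc, lam, de_over_e):
    """Temporal-coherence envelope with defocus spread delta = Cc*(dE/E):
    E_t(q) = exp(-(pi*lam*delta*q^2 / 2)^2)."""
    delta = cc * de_over_e
    return np.exp(-((np.pi * lam * delta * q**2) / 2) ** 2)

# Illustrative 300 kV values (assumed, not from the text):
lam = 1.97e-12                 # electron wavelength, m
q = np.linspace(0, 5e9, 6)     # spatial frequency up to (2 A)^-1
es = spatial_envelope(q, cs=2.7e-3, lam=lam, dz=-1e-6, alpha_i=50e-6)
et = temporal_envelope(q, cc=2.7e-3, lam=lam, de_over_e=0.7 / 300e3)
print(np.round(es, 3))
print(np.round(et, 3))
```

Both envelopes start at 1 for q = 0 and fall off toward high frequency, illustrating why information decay, rather than the lens itself, often limits the usable resolution.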

Electron Sources
There are two types of electron sources: thermionic and field-emission guns (FEGs). Thermionic sources work on the principle that if a material is heated enough, electrons will spill out as their energy overcomes the natural barrier that retains them (the work function). [66] Two materials that do not melt when heated this much are tungsten and lanthanum hexaboride (LaB6). Tungsten is used as a v-shaped wire functioning like a traditional light bulb. The more energy is applied, the more electrons are emitted from these materials, but also the shorter the lifespan of the source. Tungsten-wire sources have the poorest performance, but are cheap in material and maintenance costs, as they are robust and are least demanding with respect to their operating environment (e.g., vacuum). LaB6 cathodes have a smaller tip radius (r), and thus a smaller cross-over and corresponding opening angle, and they have a lower work function, resulting in a higher brightness and a better coherence. FEGs are the best sources currently available. They operate on the principle that the strength of an electric field is increased at sharp points. Tungsten is used for FEGs because it can be shaped into a very fine tip and, optionally, it can additionally be coated with zirconium oxide to reduce the energy barrier and heated to overcome it (cold FEG vs Schottky FEG). FEGs are the most expensive sources because of the source architecture itself and the high demand on the vacuum, since emission is only possible when the source material is free of contaminants. Cold FEGs are therefore less stable than heated sources since, despite the high vacuum, they suffer from adherence of gasses to the tip. This gradually increases the work function and thereby causes a gradual diminishing of the electron current. Hence, they need regular "heating" of the tip to clean it and keep up performance.
[78] Still, cold FEGs are clearly better than Schottky FEGs in terms of coherence and brightness, and in high-end material science applications, cold FEGs are currently the preferred choice. The research questions that need to be answered determine the minimal demands for the electron source and microscope, as well as which operating voltage is best suited. There is a trade-off between brightness and contrast, as brightness increases and contrast decreases with increasing operating voltage. Also, the number of interactions with the specimen decreases with increasing voltage, i.e., the mean free path of the electrons becomes larger. A thicker soft-matter sample is therefore better imaged with a higher accelerating voltage, allowing thicker objects to be penetrated in transmission. Higher voltages accordingly also result in less radiation damage and involve smaller wavelengths, which allow higher resolutions. [79] Furthermore, the occurrence of multiple scattering events is determined by the specimen thickness and reduces with electron energy. However, it is not always desirable to go to the highest possible accelerating voltage. Some samples and research questions may benefit more from additional contrast than from additional resolution. Recently, it was reported that the gain in contrast at lower acceleration voltages is larger than the cost of the increased radiation damage, [80] although in practice temporal coherence may pose a limitation to this. Most important for the resolution in phase-contrast images is the information limit, which is determined by the coherence of the beam and remains best in FEGs. Although Cs correctors and monochromators offer additional improvements, they are currently only rarely used in state-of-the-art structural biology studies and are not strictly necessary to reach high resolution. In practice, a tungsten filament can be used to resolve a ≈4 Å lattice (diffraction contrast).
However, for high-resolution phase contrast imaging of, for example, proteins, an FEG is essential. [81] Although high-resolution information may be present in images of soft matter, it can, due to the low S/N ratio, only be observed when multiple images of repetitive or identical structures are combined through image processing (see Section 6).

The Detection Device
The last component of the electron microscope that has a great influence on the image quality is the detector in the image plane (Figure 1). Originally, images were recorded on film, which provided decent resolution but, due to the enormous associated workload, did not easily permit processing of large numbers of images and, importantly, did not allow the instant feedback required for most data acquisition automation steps. Of lower quality, but with the power to record many images automatically, is the slow-scan charge-coupled device (CCD) camera. A CCD camera records electrons indirectly by translation into photons in a scintillator layer, a process that results in some blurring of the image due to the thickness of that layer (multiple scattering) and the spread of the light created from the incident electron. A thicker scintillator layer offers more sensitivity, but at the cost of local precision. Until a decade ago, the CCD camera was most common and its maximum achievable resolution for proteins was limited by its sensitivity and a combination of the mechanical stability of the sample stage, beam-induced sample motion and the rather long exposure times required to obtain sufficient signal.
With the development of direct detection devices (DDD), [31,82,83] the field of structural biology underwent a true resolution revolution. [24][25][26][31] DDD cameras are capable of directly detecting electrons, avoiding blurring and improving recording speed. In addition, the chip is thinner, which limits back-scattering of electrons within the chip. [84] Lastly, the fast readout opened the door to recording movies in combination with electron counting, the latter allowing the determination of electron positions to sub-pixel accuracy. Processing movies frame by frame allows the correction of beam-induced movement. [85] The low-resolution information from all frames can be used to obtain a strong S/N ratio for alignment, whereas after reconstruction, the later frames, which have accumulated more radiation damage from the higher cumulative dose, can be down-weighted, filtered or removed. Also in terms of detective quantum efficiency (DQE), a camera quality parameter describing its efficiency in terms of S/N per frequency, DDD cameras significantly outperform CCD cameras and even film. [86]
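The per-frame down-weighting described above can be sketched in a few lines. This is only an illustrative scheme, not the published dose-weighting algorithm: the exponential fall-off and the "critical dose" scale are hypothetical choices made for the example:

```python
import numpy as np

def frame_weights(dose_per_frame, n_frames, critical_dose=20.0):
    """Illustrative per-frame weights for a DDD movie. Later frames have
    absorbed more cumulative dose (e-/A^2) and are down-weighted with a
    simple exponential fall-off; `critical_dose` is a hypothetical
    damage scale, not a published constant."""
    cumulative = dose_per_frame * np.arange(1, n_frames + 1)
    return np.exp(-cumulative / critical_dose)

# Example movie: 8 frames at 1.25 e-/A^2 each (10 e-/A^2 total)
w = frame_weights(dose_per_frame=1.25, n_frames=8)
print(np.round(w, 2))
```

The first frames, which carry the least-damaged high-resolution signal, keep weights near 1, while the last frames contribute progressively less to the final reconstruction.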

Image Formation and Diffraction
Electron microscopy samples typically behave as gratings: no electrons are absorbed and most travel through the sample unchanged. A TEM image is formed by wave interference. Wave interference describes the interaction of two or more waves upon meeting. [87] The interference is called constructive when those electron waves are in phase, leading to an increase in amplitude, or destructive when the waves are out of phase, leading to a reduction in amplitude (Figure 4). This can be visualized by shining a laser on a regularly spaced grating (for example a TEM grid) and placing a screen some distance away. The resulting image is called a diffraction pattern, which in the back focal plane of a lens is described by the Fraunhofer grating formula

nλ = d sin(θ) (3)

in which λ is the wavelength of the source, d is the spacing in the grating and θ is the scattering angle of the wave from the sample relative to the source direction. The grating formula describes the conditions that need to be met in order to have positive interference. A typical sample will contain all levels of detail (d) rather than just one regular spacing, but lens theory predicts that all parallel scattered waves have to meet in the back focal plane of the lens, causing them to interfere. Combined with the grating formula, this means that, in order to have positive interference, the angle of scattering has to be larger when d is smaller. In the back focal plane, all information of the sample is therefore sorted by frequency (1/d), with the highest frequencies (i.e., the finest levels of detail) ending up furthest out. A diffraction pattern does not contain structural phase information, because in the back focal plane the scattered and unscattered electrons are separated and there is only interference of the scattered waves with each other (Figure 5a). The diffraction pattern is only affected by the coherency of the incoming beam.
Therefore, a diffraction pattern from a regularly spaced sample can reach a higher resolution than a regular image. In the image plane, the scattered and unscattered information is recombined, resulting in wave interference, this time of the scattered with the unscattered electrons. This adds the information of the image plane, i.e., the (registered) intensity, but also the effects of spherical aberration in the lenses, focusing and beam-induced drift.
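The grating formula can be used directly to estimate how strongly a given level of detail is scattered. This small sketch assumes a 300 kV electron wavelength of about 1.97 pm; the spacings are illustrative:

```python
import math

def grating_angle(d, lam, n=1):
    """Scattering angle theta (radians) satisfying the grating
    condition n*lam = d*sin(theta)."""
    return math.asin(n * lam / d)

lam = 1.97e-12  # ~300 kV electron wavelength, m (assumed)
for d_angstrom in (10, 4, 2):
    d = d_angstrom * 1e-10
    theta_mrad = grating_angle(d, lam) * 1e3
    print(f"d = {d_angstrom:2d} A -> theta ~ {theta_mrad:.2f} mrad")
```

Even the finest spacings scatter at only a few milliradians, and the finer the detail, the larger the angle, consistent with high-resolution information landing furthest out in the back focal plane.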

Contrast
Two types of contrast can be distinguished in the electron microscope: scattering and phase contrast. To generate scattering contrast, the electrons scattered at large angles are removed by placing an aperture in the back focal plane of the objective lens. This blocks them from recombining in the image with the unscattered electrons (Figure 5). A heavy atom, which scatters more strongly, will therefore appear darker in the image compared to a lighter atom (lower atomic number).

Figure 4. a) Constructive interference and b) destructive interference. Parallel scattered waves interact in the diffraction plane (back-focal plane at the focal length of the lens). When two waves meet, their amplitudes are added, the effect of which is determined by their respective phases. Constructive interference results in an intensity peak and occurs when the interfering waves are in phase, which is only true for those scattering angles that allow the difference in path length between the two waves to be nλ (a vs b). D is the (magnified) reciprocal distance in the image between events of positive wave interference. Smaller details need larger scattering angles and require larger D values for this to be true, which means that in the diffraction plane information from the grating (specimen) is sorted by size, with the smallest details furthest out in the back-focal plane.
Soft-matter samples with low atomic number elements result in small-angle scattering only and, despite the use of an aperture, have little to no contrast in a focused image (Figure 6). In focus, the scattered waves recombine with the waves that did not interact with the sample (Figure 6a,b). Because the amplitude of the scattered component is very small compared to the amplitude of the unscattered beam and their phase difference is at that point ≈0.25λ, there is no constructive or destructive interference, and thus hardly any contrast. Phase contrast is generated by going out of focus. This causes a spread of information and thus blurring of the image, but it also shifts the phases of the scattered electrons increasingly with increasing frequency, creating frequency-dependent interference and thus a defocus-dependent contrast (Figure 6c-e). The spherical aberration (Cs) of the objective lens adds another phase shift to the scattered waves. The resulting image of a weak scattering object has a contrast that is described in Fourier space (Equation (4)). A Fourier transform (FT) is the mathematically calculated diffraction pattern of an electron microscopy image, giving the frequency representation, or power spectrum, of the image (Figure 6f). The difference between actual diffraction and the FT is the second round of wave interference from the image, introducing the structure phases in the image and thus the (phase) contrast transfer function (CTF)

CTF(ν) = −2 sin(χ(ν)), with χ(ν) = π Δz λ ν² − (π/2) Cs λ³ ν⁴ (4)

in which χ is the total phase shift due to spherical aberration and defocus, ν is the frequency (1/d) and Δz is the amount of defocus (nm) that was applied. In practice, images are taken under focus, not over focus, because even in a pure phase object, such as soft materials in ice, some scattering contrast exists (Figure 6b). By operating under focus (Figure 6d), both scattering and phase contrast have the same direction in the low frequencies, making the image easier to interpret (compare Figure 6d,e).
Because phase contrast (small-angle scattering) and scattering contrast (large-angle scattering) are not separated, care must be taken that the selected objective aperture is not so small that it starts blocking phase-contrast information. As the information of the sample is sorted by frequency in the back-focal plane, with the highest-resolution features furthest out (Figure 4), a small aperture can reduce the resolution (compare Figure 6f,g).
To summarize, operating under focus causes a phase shift that is larger at higher frequency, which, combined with the effect of the Cs, is described in Fourier space by the CTF (Equation (4); Figure 6f). Its equivalent effect in real space is called the point spread function (PSF). [88] The CTF causes the reversal of contrast for certain frequencies and, more severely, loss of information for frequencies with close to full- and half-wavelength phase shifts. Knowing the Cs of the lens, the amount of defocus, and the wavelength of the electrons allows the calculation of the CTF and its correction. The CTF of the image can be computationally corrected by Wiener filtering: dividing the FT by a thresholded CTF. First, this makes all contrast unidirectional. Second, it relocates the spread information back to the point where it would have arrived in focus (correcting the PSF). Finally, the filter lifts the amplitudes, and thus the contrast, of those frequencies that have less than optimal contrast, while taking the S/N ratio into account to avoid enhancement of noise. There is, however, no recovery of information that has little or no contrast. As each defocus value has its own PSF and zero crossings, the only way to obtain that missing information and enhance the S/N ratio is by averaging multiple images taken at different defocus settings.
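As an illustrative sketch only (sign conventions, the amplitude-contrast term, and envelope functions differ between CTF-correction packages; the numbers below are arbitrary example settings, not values from this work), the oscillating CTF described by Equation (4) and a simple Wiener-style correction can be written as:

```python
import numpy as np

def ctf_1d(nu, defocus_nm, cs_mm=2.7, kv=300.0):
    """Textbook 1D phase CTF: -sin(chi), with chi the total phase shift
    from defocus and spherical aberration. nu in 1/nm, underfocus positive."""
    v = kv * 1e3
    # relativistic electron wavelength in nm (standard approximation)
    lam = 1.2264 / np.sqrt(v * (1.0 + 0.97845e-6 * v))
    cs = cs_mm * 1e6  # mm -> nm
    chi = np.pi * lam * defocus_nm * nu**2 - 0.5 * np.pi * cs * lam**3 * nu**4
    return -np.sin(chi)

def wiener_correct(ft, ctf, snr=10.0):
    """Wiener-style division by a 'thresholded' CTF: flips reversed contrast,
    boosts weak frequencies, and avoids dividing by ~0 at the zero crossings."""
    return ft * ctf / (ctf**2 + 1.0 / snr)

nu = np.linspace(0.0, 0.5, 1000)       # spatial frequencies up to 2 nm detail
ctf = ctf_1d(nu, defocus_nm=1500.0)    # 1.5 um underfocus
```

Plotting `ctf` against `nu` reproduces the ring pattern seen in the power spectrum of Figure 6f; note that the Wiener division damps, rather than inverts, the frequencies near the zero crossings, where no information can be recovered.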

Phase Plate Technology
Phase contrast was first discovered for phase objects (nonabsorbing samples) in light microscopy by Frits Zernike. [89] The phase plate is composed of a phase object (glass in the case of light microscopy) that shifts the waves that pass through it by 0.25λ. It has a small hole in the center and is placed in the back-focal plane of the objective lens (Figure 6h). The unscattered beam is focused by the lens through the focal point, i.e., the hole of the phase plate, while all scattered information is sorted by scattering angle and therefore travels through the phase plate and obtains a 0.25λ phase shift, making the total phase difference between scattered and unscattered light 0.5λ. When the original unchanged beam and the phase-shifted waves meet in the image plane, their interference is destructive, and hence the image of a more strongly scattering object will appear darker (Figure 6h).

Figure 5. Scattering contrast in the electron microscope. Because the sample is thin, most electrons pass through without interacting with the sample (purple lines), some are scattered at different angles (green), and there is no absorption. In the back-focal plane (F), the scattered electrons interfere with each other and are separated from the unscattered beam. a) The scattered electrons recombine with the beam in the image plane. b) An objective aperture in the back-focal plane allows the removal of electrons that were scattered over large angles, making a more strongly scattering object appear darker in the image.

Figure 6. Phase contrast in the electron microscope. a) When the scattering angles are too small to be removed by an aperture without loss of resolution, b) an in-focus image has no contrast, because the scattered information recombines with the unscattered beam. c) Out of focus, scattered waves have delayed phases, depending on the scattering angle (and corresponding path length) of the scattered electron, and the delay thus differs per frequency. Imaging out of focus therefore not only blurs the image, but also gives a per-frequency phase shift (Equation (4)). The different effects of defocus on contrast are demonstrated by cryo-EM images (Titan Krios, 300 keV, FEG, Ceta camera) of Doxil, an anticancer drug composed of liposome-packaged doxorubicin hydrochloride crystals: b) in focus, d) under focus, and e) over focus. Different frequencies are imaged with different contrast when going under or over focus. f) Calculating the FT of an image shows the signal intensity (contrast) sorted by frequency, clearly demonstrating the effect of the CTF (orange line) and the effect of the temporal and spatial coherence (reduction of signal for higher frequencies). g) While (f) was taken with a 100 µm aperture, taking the same image with an excessively small 30 µm aperture shows loss of high-frequency information (outer rings in the FT). As an alternative to defocusing, a phase plate can be used. In light microscopy, it changes the interference of the scattered and unscattered beam in the image plane to destructive by shifting the scattered waves by, optimally, 1/4 λ, h) while the unscattered beam passes through the hole in the center unchanged. i) In electron microscopy, the so-called Volta or hole-free phase plates can yield a similar effect without a hole. They use the charging effect of the beam on the carbon, which means that the shift builds up over time. [92]
A usable phase plate for the electron microscope became available only this century, [90][91][92] mainly due to difficulties with scattering and radiation damage. The Volta phase plate, or hole-free phase plate, consists of a continuous carbon film, which is slowly charged by the beam. This creates a potential that becomes stronger with longer usage times, inducing phase shifts that increase concomitantly. In practice, phase shifts between 0.2π and 0.8π are useful, and around 50 images can be taken before moving to a new location on the phase plate. Imaging exactly in focus, however, is time consuming (due to the repeated focusing needed to achieve this), and the corresponding lack of CTF implies that the maximal achievable resolution is limited by the spherical aberration of the objective lens, something that can be corrected in a traditional out-of-focus image (Equation (4)). Therefore, the phase plate is often combined with a very small amount of defocus, giving optimal contrast at minimal loss of information. [93] Especially for small proteins (<50 kDa) and for tomography, a phase plate can be very useful to gain sufficient contrast. [91,94,95] When used properly, phase plates provide important improvements, [93,96] and more and more TEMs are being equipped with them. Although phase plates have, to the best of our knowledge, not yet found their way into the field of soft-matter chemistry, they could prove extremely valuable. [97] Data processing to enhance contrast is often not possible due to the heterogeneity of self-assembled materials, and in these cases low-scattering materials like vesicles (Figure 6i) or micelles can be imaged with much more contrast by making use of a phase plate.
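The effect of the extra 1/4 λ shift can be demonstrated with a minimal numerical sketch. For a weak phase object, the scattered wave is already π/2 (0.25λ) behind the unscattered beam; the scattered amplitude of 0.05 below is an arbitrary illustrative value:

```python
import numpy as np

def image_intensity(scatter_amp, extra_phase):
    """Intensity where a weak scattered wave (already pi/2 behind the
    unscattered beam, as for a weak phase object) recombines with the
    unscattered beam after an extra phase shift, e.g. from a phase plate."""
    unscattered = 1.0 + 0j
    scattered = scatter_amp * np.exp(1j * (np.pi / 2 + extra_phase))
    return abs(unscattered + scattered) ** 2

no_plate = image_intensity(0.05, 0.0)          # ~1.0025: almost no contrast
with_plate = image_intensity(0.05, np.pi / 2)  # ~0.9025: clearly darker
```

Without the plate, the waves are in quadrature and the intensity barely changes; shifting the scattered wave by another 0.25λ makes the total difference 0.5λ, so the interference becomes destructive and the object appears dark.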

Combining Images and 3D Reconstructions of Objects
Although a single electron microscopy image can be sufficient to answer many research questions, image processing allows per-frequency modifications, enhancement of the S/N ratio, and/or the extraction of 3D information.
Fourier transforms allow all information in an image to be sorted by frequency, and specific levels of detail to be targeted, before recalculating the image. This is extremely useful when analyzing single images, for example for the removal of noise frequencies when the maximum level of detail is known (low-pass filtering) or when the maximum particle size is known (high-pass filtering). [98][99][100] Two main approaches exist for improving the S/N ratio and obtaining 3D information: single particle analysis (SPA) and tomography. The latter may be followed by sub-tomographic averaging.
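Such frequency-targeted filtering can be sketched with a radial mask in Fourier space (the cutoff and the random test image are arbitrary; real packages typically use soft-edged masks to avoid ringing artifacts):

```python
import numpy as np

def radial_filter(img, cutoff, mode="low"):
    """Low- or high-pass filter an image by masking its Fourier transform.
    cutoff is a fraction of the Nyquist frequency (0..1)."""
    ft = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    # radial frequency coordinate: 0 at the centre, 1 at Nyquist
    r = np.hypot(y - h / 2, x - w / 2) / (min(h, w) / 2)
    mask = r <= cutoff if mode == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(ft * mask)))

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))      # stand-in for a noisy micrograph
smooth = radial_filter(noisy, 0.2)     # keep only the coarse detail
```

The low-pass version suppresses the high-frequency noise while keeping large-scale features, which is exactly the operation used when the maximum meaningful level of detail in an image is known.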

3D Reconstruction by Single Particle Analysis
Single TEM images contain noise, due to the limitations that radiation damage poses to the dose that can be used for imaging. To improve the S/N, SPA combines many images of many identical (or highly similar) objects. The misleading term "Single" is a historical reference to the fact that the particles are not in a crystal-like packing, but freely dispersed (e.g., purified proteins). However, nowadays 2D (pseudo-)crystals, helices, and tubular structures can also be processed by SPA approaches. SPA serves the purposes of noise elimination and the study of common features, which can be achieved by superimposition, as was discovered for the photographic case in 1879. [101] It was first used in electron microscopy in 1963, making use of the internal symmetry of virus particles. [102] The presence of regular structures in a sample allows processing by SPA in one (e.g., [103][104][105][106] ), two (e.g., [54] ) or three dimensions (e.g., [107,108] ). SPA in one dimension requires the fewest images and creates a radial density profile by averaging pixels over the length of, for example, a tube, allowing a closer look at, for example, tube thickness by enhancing the contrast profile in the tube. This gives a better view of the Fresnel fringes and thus more accurate and reproducible distance measurements without the need for CTF correction (e.g., [103][104][105] ). If possible, different views of the object are combined into 3D models of the sample (e.g., refs. [107,108]). The processing pipeline typically required for 2D and full 3D analyses starts with motion correction of the recorded movies (DDD camera only) and CTF correction. Following this, particles, or rather small subimages representing them, are selected either manually or computationally from each microscopy image and sorted into groups based on similarity, providing several 2D averages of multiple images, each of which has an improved S/N.
These averages can be used to clean the particle set, for example by removing groups of damaged particles, and are sometimes already sufficient to answer the research question. When desired, the dataset is then further grouped into one or multiple 3D models, computationally determining the relative angles, shifts, and rotations of the images. More detailed information on SPA can be found in the following references. [28,30,[109][110][111][112][113][114][115][116][117] SPA allows a combined total electron dose that far exceeds the tolerance of a single object and can therefore reach very high resolutions (Å range).
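The core of the S/N gain from averaging can be sketched in a few lines (a toy, pre-aligned "particle"; real SPA additionally has to find the alignments and orientations, which is the hard part):

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 2D reference "particle", imaged many times with heavy noise
# (dose-limited imaging); copies are assumed already aligned.
reference = np.zeros((32, 32))
reference[12:20, 12:20] = 1.0              # a simple square object

def noisy_copy(sigma=3.0):
    return reference + rng.normal(scale=sigma, size=reference.shape)

n = 400
average = sum(noisy_copy() for _ in range(n)) / n

# Uncorrelated noise averages out as 1/sqrt(n): 400 copies give a
# roughly 20-fold reduction of the noise level.
residual_single = (noisy_copy() - reference).std()
residual_avg = (average - reference).std()
```

This is why SPA can exceed the dose tolerance of any single object: each image stays within the damage limit, while the common signal accumulates across the whole set.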

Tomography
Tomography, in contrast to SPA, is based on recording multiple images of an individual (single) object under different angles and combining these images to reconstruct a 3D volume, the tomogram. [118][119][120] 3D information is extremely valuable. The recording and analysis of a single tomogram requires much less time than 3D SPA and allows the exploration of a larger volume, typically ≈200-600 nm thick, over a range of x/y dimensions (and hence varying pixel sizes and concomitant levels of detail, depending on the chosen magnification). Reconstruction is based on the principle of back projection. [14] The sample is tilted in small increments and an image is recorded at each angle, taking care that the total dose the sample receives is not so high as to cause radiation damage.
From these images, and knowing the respective tilt angles, the 3D volume is reconstructed. Tomography therefore allows the study of structures in their context, and it does not demand the presence of regularity to obtain 3D information. When extended by sub-tomographic averaging, i.e., combining sub-volumes of reoccurring objects in the tomograms, the S/N ratio and thus the visible resolution can be further improved. This is analogous to SPA, but starting from 3D rather than 2D objects. Furthermore, the tomograms allow the improved structures to be placed back into their original orientation and context to study their interactions with the surrounding sample. Data acquisition in tomography limits the dose per image to allow multiple images of the same area. Another limitation of tomography is the maximal tilt range, which is on the order of −70° to 70° instead of −90° to 90° due to the sample-holder geometry, leading to a missing wedge of information. Besides the higher noise level of tomograms, the final resolution (nm scale) in sub-tomographic averaging is limited by the relatively thick samples; in addition, the computational demands and workload per tomogram are higher than for the single images used in SPA. In soft-matter chemistry, tomography is especially useful for studying more complex assemblies. [121,122] It distinguishes between structures lying on top of each other and structures sitting inside each other, and allows the characterization of the respective interfaces. [123][124][125] Which 3D technique is best suited typically depends on the sample and the research questions. SPA requires a purified sample with high homogeneity and can provide better resolutions, while tomography results are noisier but do not require purification or homogeneity, making this technique extremely useful for soft-matter chemists as a tool to obtain 3D information.
Both approaches can also be combined, for example, by using tomography to get a starting model and wider context of a structure while SPA can deliver the highest obtainable structural detail.
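A toy 2D version of tilt-series acquisition and simple (unfiltered) back projection illustrates the principle; real packages use weighted or filtered back projection and must also align the tilt images, and `scipy.ndimage.rotate` here merely stands in for the goniometer tilt:

```python
import numpy as np
from scipy.ndimage import rotate

def project(slice_2d, angle_deg):
    """Simulate one tilt image: rotate a 2D slice and sum along one axis
    (a 2D stand-in for tilting a 3D sample and projecting)."""
    return rotate(slice_2d, angle_deg, reshape=False, order=1).sum(axis=0)

def back_project(projections, angles_deg, size):
    """Smear each projection back across the reconstruction at its
    tilt angle and sum the contributions."""
    recon = np.zeros((size, size))
    for proj, angle in zip(projections, angles_deg):
        smear = np.tile(proj, (size, 1))            # constant along the beam
        recon += rotate(smear, -angle, reshape=False, order=1)
    return recon / len(angles_deg)

phantom = np.zeros((64, 64))
phantom[28:36, 20:44] = 1.0                  # a simple test object
angles = np.arange(-70, 71, 2)               # +/-70 deg: note the missing wedge
tilts = [project(phantom, a) for a in angles]
recon = back_project(tilts, angles, 64)
```

The restricted angular range in `angles` reproduces the missing-wedge situation of a real tilt series: features whose information falls into the unsampled wedge are smeared in the reconstruction.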

Sample Preparation
The harsh conditions of high vacuum and strong radiation affect the way samples have to be prepared. This needs to be taken into account during image interpretation, as the preparation procedures often result in loss of the natural state of the sample. Dry samples containing heavy metals, for example, are less susceptible to radiation damage and give good contrast. Hard materials are often independent of solvents and therefore suitable for TEM after drying. [126,127] Organic and soft-matter materials, however, need preservation. [33,128] The selection of a suitable sample preparation method needs to balance contrast (addition of heavy metals: yes or no), achievable resolution, and native state versus more artificial preservation.
Soft-matter chemistry samples are currently most commonly studied by drying, negative staining, and/or cryo-fixation. [33] However, drying is highly unsuitable for soft materials. [33,128,129] It can cause strong artifacts, aggregation, deformation, and sometimes complete reorganization. For newly designed materials, the distinction between artifact and real structure can often only be made by comparing images of differently prepared samples, which makes the dried sample dispensable. Drying is only useful to test the stability of an assembly against drying, again by comparing it to other preparations. [33,56,59,130] While polymers can be stable enough to withstand drying, this is not always the case. In particular, polymer systems that rely on water, solvent, or mixtures thereof, such as amphiphilic block-copolymers, can undergo significant changes upon drying and are therefore not suitable for dry preparation. [128] Staining is a better choice, as it often concomitantly fixes the sample shape, enhances contrast, and protects somewhat against dehydration and radiation damage. A positive heavy-metal stain is achieved by adding metals that adhere to specific parts of the sample. A negative stain is a fine-grained metal solution that lightly covers and surrounds the sample, creating a cast- or footprint-like image of the objects. [10] This results in light objects of interest outlined by a dark halo of heavy-metal stain. The disadvantages of such preparations are: i) the image represents the stain-excluding surface, obscuring internal features; ii) staining requires a sample support, limiting the visible orientations; iii) the grain size of the stain limits the maximum achievable resolution; and iv) the specimens undergo significant dehydration.
By far the best preservation technique is cryo-fixation. For this, typically a drop of sample is applied to a holey carbon- or gold-coated grid and blotted to leave a thin layer of solution in the holes of the grid. Then the grid is plunge-frozen in liquid ethane or ethane/propane. This freezing step is so fast that it produces vitreous (glass-like) ice, avoiding the formation of ice crystals, which can damage and obscure the sample. [131,132] Specimen thickness affects the quality of cryopreservation: while it will be good at the specimen surface, it may be poor in the center. Only if the sample is thin (significantly less than ≈10 µm for plunge-frozen specimens and less than 300 µm if high-pressure frozen) can cryo-immobilization produce truly vitreous ice throughout the sample. [22] Cryo-EM of vitrified soft-matter specimens gives low contrast and typically relies on phase contrast only. [34,133] Still, it offers visualization of the sample in its most native state and is the best method to reach relevant high resolution in both imaging and image processing.

Cryo-FIB Milling
TEM on soft matter is limited to a sample thickness on the order of ≈300-800 nm, depending on the acceleration voltage. Beyond this, the image will appear more or less black, as too few electrons can penetrate the specimen to create an image. Thicker materials can be locally micromachined into thin lamellae with a focused ion beam (FIB milling). However, specimens sensitive to the ion or electron beam, samples lacking rigidity and/or containing or relying on water (e.g., polymers, lipids, organic compounds, and biological materials), samples that are (glass-transition) temperature dependent (e.g., (block-co)polymers), and samples that react with gallium (e.g., semiconductors) are better not FIB-milled at room temperature, as they suffer too much from this preparation technique. [134][135][136][137] Having established that for these soft or mixed materials cryo-freezing is the preferred preparation technique, corresponding cryo-compatible thinning methods for thick samples have been developed.
At the beginning of this century, tomography on vitreous sections became possible. [138][139][140] These sections are cut by cryo-ultramicrotomy, analogous to traditional plastic-embedded sections. [141] Unfortunately, the technical difficulty, slow throughput, and artifacts such as knife marks, deformation, and preferential fracture of heterogeneous materials pose serious limitations to this technique. [135] More recently, solutions were found to create thinned areas within cryo-frozen specimens by FIB milling. [142][143][144][145] Cryo-FIB milling is easier than cryo-ultramicrotomy, is better at targeting, and creates fewer artifacts. In short, the cryo-frozen sample is tilted and the excess material is ablated under a shallow angle, usually with gallium ions. Various geometries are established, including wedges and very thin lamellae that can be imaged and studied by cryo-TEM after transfer (Figure 7). This process is most easily executed in dedicated dual-beam cryo-FIB-SEM microscopes, which allow the user to perform all steps, including (fluorescence-signal-based correlative) targeting, scanning electron microscopy (SEM) imaging, sputter coating, platinum gas injection system (GIS) deposition, and milling, within one machine. This reduces unnecessary transfers and handling steps that pose a risk to the delicate cryo-frozen specimen. Unified holder designs (minimal cartridge system) between the cryo-FIB-SEMs and transmission electron cryo-microscopes facilitate this workflow.
Targeting of the milling process can be based on SEM imaging inside the microscope or based on, for example, fluorescence images made prior to loading the sample. SEM imaging is kept brief to avoid radiation damage to the sample. If more intense imaging is desired, the sample can be sputter-coated with a protective platinum layer that is conductive to the electron beam and protects the sample somewhat against beam damage. Prior to milling, to protect the sample against unwanted ion-beam erosion as well as to minimize "curtaining" artifacts (uneven vertical striation resulting from uneven milling speeds or from more FIB-resistant material in the sample), the sample is coated with an edging material, a thin metal layer (usually platinum), by means of a GIS. [134,135] This nonconductive layer protects the milling edge and evens out the milling speed in such a way that the surface of the lamella remains smooth. After the final milling step, the lamella is sputter-coated with a very thin (≈2 nm) layer of conductive material, for example platinum. This final layer helps dissipate the charge that builds up in the sample under the electron beam, reducing charging and beam-induced motion and improving the beam tolerance of the lamella enough to allow tomography.

Figure 7. Cryo-FIB workflow. a) After selecting an area of interest with the scanning electron microscope, in this case an adenovirus-infected human lung cell, a pattern is drawn to direct b) the gallium ion beam and mill away the cellular material until a thin lamella is left (c: SEM and d: ion beam) (Aquilos Cryo-FIB, Thermo Fisher Scientific). e,f) The sample is then transferred into a TEM. e) It can be observed that only the lamella transmits electrons, whereas the remainder of the cell is too thick and therefore very dark. f) A higher-magnification image provides insight into the cell, showing here a multilamellar body (Titan Krios, 300 keV, FEG, Gatan K2 Detector).
Gels, matrices and other more elaborate soft materials, as well as combined systems with hard and soft materials are hard to freeze into a thin layer and remain typically too thick for TEM. They are therefore not often targeted with cryo-EM. Applying cryo-FIB, possibly in combination with high-pressure freezing would allow insight into their structure and can supply a wealth of information. Currently still rare, but expected to widen the applications of cryo-FIB even further, is the cryo-FIB lift-out technique, which allows a lamella to be prepared from a thicker sample by picking a desired area from a bulky sample. [106,[146][147][148]

Summary and Outlook
Electron microscopy plays a significant role in soft-matter chemistry for the characterization of new molecules, yet it is currently applied only occasionally in this field, and mostly through collaborations. This review provides a technical background on the technique, as well as on the recent developments in the fields of structural and cell biology, where cryo-EM and related technical advances are revolutionizing entire fields of science. A better understanding of what is possible with cryo-EM, insight into the choices and decisions taken during data acquisition, and knowing what is needed to get the most out of one's data will assist soft-matter chemists in quality assessment and in pushing their electron microscopy to the forefront of science. With this overview, we hope to inspire more exchange between scientists in the soft-matter chemistry field and microscopists specialized in structural and cell biology, who are leading experts in soft-matter sample preparation and electron microscopy imaging techniques.
In particular, studies focusing on molecules whose structure, function, and behavior rely on a particular medium have much to gain from cryo-EM. Examples of such samples are amphiphiles, amphiphilic block-copolymers, and all other molecules designed to self-assemble or aggregate into specific structures. The field of soft-matter chemistry needs to stop accepting samples prepared by drying for subsequent EM studies and make cryo-EM the primary technique of choice. This will enable it to step up to a new level. Currently, a thorough analysis by cryo-EM sometimes seems to be considered too expensive, too difficult, too time consuming, or even inaccessible. However, the advances in the field of biology have driven the development of national facilities, and access to more advanced equipment is increasingly available. A single cryo-TEM image holds the power to demonstrate directly the products of self-assembly. Furthermore, implementing the theoretical knowledge of image and contrast formation described above allows better interpretation of the obtained image data. Moreover, recent developments like cryo-FIB milling have opened the door to a much wider range of samples, such as larger products of self-assembly that hitherto could not be imaged directly because of their sheer size. Phase-plate technology allows imaging with such high contrast that the lack of contrast of phase objects is no longer an issue either. Adding to that the possibilities of data processing and tomography for 3D analysis, molecular systems and their capabilities can now be characterized comprehensively. The increasing complexity of soft-matter systems will demand that the field make use of these developments and techniques. Adopting them quickly will undoubtedly allow a tremendous advance in the field of soft-matter chemistry.