Geometry-complete diffusion for 3D molecule generation and optimization

Generative deep learning methods have recently been proposed for generating 3D molecules using equivariant graph neural networks (GNNs) within a denoising diffusion framework. However, such methods are unable to learn important geometric properties of 3D molecules, as they adopt molecule-agnostic and non-geometric GNNs as their 3D graph denoising networks, which notably hinders their ability to generate valid large 3D molecules. In this work, we address these gaps by introducing the Geometry-Complete Diffusion Model (GCDM) for 3D molecule generation, which outperforms existing 3D molecular diffusion models by significant margins across conditional and unconditional settings on both the QM9 dataset and the larger GEOM-Drugs dataset. Importantly, we demonstrate that GCDM's generative denoising process enables the model to generate a significant proportion of valid and energetically-stable large molecules at the scale of GEOM-Drugs, whereas previous methods fail to do so with the features they learn. Additionally, we show that extensions of GCDM can not only effectively design 3D molecules for specific protein pockets but can also be repurposed to consistently optimize the geometry and chemical composition of existing 3D molecules for molecular stability and property specificity, demonstrating new versatility of molecular diffusion models. Code and data are freely available on GitHub.

Fig. 1: A framework overview of the proposed Geometry-Complete Diffusion Model (GCDM) for geometric and chirality-aware 3D molecule generation. The framework consists of (i) a graph (topology) definition process; (ii) a GCPNet-based graph neural network for SE(3)-equivariant graph representation learning; (iii) denoising of 3D input graphs using GCPNet++; and (iv) application of a trained GCPNet++ denoising network for 3D molecule generation. Zoom in for the best viewing experience.

Introduction
Generative modeling has recently been experiencing a renaissance driven largely by denoising diffusion probabilistic models (DDPMs). At a high level, DDPMs are trained by learning how to denoise a noisy version of an input example. For example, in the context of computer vision, Gaussian noise may be successively added to an input image. We would then desire for a generative model of images to learn how to distinguish between the original input image's feature signal and the noise subsequently added to the image. If a model can achieve such outcomes, we can use it to generate novel images by first sampling multivariate Gaussian noise and then iteratively removing, from the current state of the image, the noise predicted by the model. This classic formulation of DDPMs has achieved significant results in the space of image generation [1], audio synthesis [2], and even meta-learning by learning how to conditionally generate neural network checkpoints [3]. Furthermore, such an approach to generative modeling has expanded its reach to encompass scientific disciplines such as computational biology [4][5][6][7][8], computational chemistry [9][10][11], and computational physics [12].
Concurrently, the field of geometric deep learning (GDL) [13] has seen a sizeable increase in research interest lately, driven largely by theoretical advances within the discipline [14] as well as by novel applications of such methodology [15][16][17][18]. Notably, such applications even include what is considered by many researchers to be a solution to the problem of predicting 3D protein structures from their corresponding amino acid sequences [19]. Such an outcome arose, in part, from recent advances in sequence-based language modeling efforts [20, 21] as well as from innovations in equivariant neural network modeling [22].
However, it is currently unclear how the expressiveness of geometric neural networks impacts the ability of generative methods that incorporate them to faithfully model a geometric data distribution. In addition, it is currently unknown whether diffusion models for 3D molecules can be repurposed for important, real-world tasks without retraining or fine-tuning and whether geometric diffusion models are better equipped for such tasks. Toward this end, in this work, we provide the following findings.
• Neural networks that perform message-passing with geometric quantities enable diffusion generative models of 3D molecules to generate valid and energetically-stable large molecules, whereas non-geometric message-passing networks fail to do so; we introduce key computational metrics to enable these findings.
• Physical inductive biases such as invariant graph attention and molecular chirality both play important roles in generating valid 3D molecules via diffusion.
• Our newly-proposed Geometry-Complete Diffusion Model (GCDM), which is the first diffusion model to incorporate the above insights and achieve the ideal type of equivariance for 3D molecule generation (i.e., SE(3) equivariance), establishes new state-of-the-art (SOTA) results for conditional 3D molecule generation on the QM9 dataset as well as for unconditional molecule generation on the GEOM-Drugs dataset of large 3D molecules, for the latter more than doubling PoseBusters validity rates; generates more unique and novel small molecules for unconditional generation on the QM9 dataset; and achieves better Vina energy scores and more than twofold higher PoseBusters validity rates [23] for protein-conditioned 3D molecule generation.
• We further demonstrate that geometric diffusion models such as GCDM can consistently perform 3D molecule optimization for molecular stability as well as for specific molecular properties without requiring any retraining, whereas non-geometric diffusion models cannot.

Unconditional 3D Molecule Generation -QM9
The first dataset used in our experiments, the QM9 dataset [24], contains molecular properties and 3D atom coordinates for 130k small molecules. Each molecule in QM9 can contain up to 29 atoms after hydrogen atoms are imputed following dataset postprocessing as in Hoogeboom et al. [25].
Metrics. We measure each method's average negative log-likelihood (NLL) over the corresponding test dataset, for methods that report this quantity. Intuitively, a lower test NLL indicates that a method can more accurately predict denoised pairings of atom types and coordinates for unseen data, implying that it has fit the underlying data distribution more precisely than other methods. In terms of molecule-specific metrics, we adopt the scoring conventions of Satorras et al. [27] by using the distance between atom pairs and their respective atom types to predict bond types (single, double, triple, or none) for all but one baseline method (i.e., E-NF). Subsequently, we measure the proportion of generated atoms that have the right valency (atom stability, AS) and the proportion of generated molecules for which all atoms are stable (molecule stability, MS). To offer additional insights into each method's behavior for 3D molecule generation, we also report the validity (Val) of the generated molecules as determined by RDKit [28]; the uniqueness of the generated molecules overall (Uniq); and whether the generated molecules pass each of the de novo chemical and structural validity tests (i.e., sanitizable, all atoms connected, valid bond lengths and angles, no internal steric clashes, flat aromatic rings and double bonds, low internal energy, correct valence, and kekulizable) proposed in the PoseBusters software suite [23] and adopted by recent works on molecule generation tasks [29, 30]. Each method's results in the top half (bottom half) of Table 1 are reported as the mean and standard deviation (mean and Student's t-distribution 95% confidence error intervals) (±) of each metric across three (five) test runs on QM9, respectively.
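Once bond orders have been inferred from inter-atom distances, the atom-stability (AS) and molecule-stability (MS) metrics above reduce to a valency check. A minimal sketch in Python; the allowed-valence table and helper names here are illustrative, not the paper's exact implementation:

```python
# Illustrative allowed valences per element (a real table also covers charges
# and elements beyond these five).
ALLOWED_VALENCES = {"H": {1}, "C": {4}, "N": {3}, "O": {2}, "F": {1}}

def atom_stability(atom_types, bond_orders):
    """Return one boolean per atom: does its summed bond order match an
    allowed valence for its element?"""
    valence_sums = [0] * len(atom_types)
    for (i, j), order in bond_orders.items():
        valence_sums[i] += order
        valence_sums[j] += order
    return [valence_sums[k] in ALLOWED_VALENCES[t] for k, t in enumerate(atom_types)]

def molecule_stability(atom_types, bond_orders):
    """A molecule is stable only if every one of its atoms is stable."""
    return all(atom_stability(atom_types, bond_orders))

# Methane: one carbon single-bonded to four hydrogens.
atoms = ["C", "H", "H", "H", "H"]
bonds = {(0, 1): 1, (0, 2): 1, (0, 3): 1, (0, 4): 1}
```

The dataset-level AS and MS scores are then simply the fractions of stable atoms and of fully-stable molecules over all generated samples.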
Baselines. Besides including a reference point for molecule quality metrics using QM9 itself (i.e., Data), we compare GCDM (a geometry-complete DDPM, i.e., GC-DDPM) to 10 baseline models for 3D molecule generation, each trained and tested using the same corresponding QM9 splits for fair comparisons: G-SchNet [31]; Equivariant Normalizing Flows (E-NF) [27]; Graph Diffusion Models (GDM) [25] and their variation GDM-aug; Equivariant Diffusion Models (EDM) [25]; Bridge and Bridge + Force [32]; latent diffusion models (LDMs) such as GraphLDM and its variation GraphLDM-aug [33]; as well as the state-of-the-art GeoLDM method [33]. Note that we specifically include these baselines as representative implicit bond prediction methods, for which bonds are inferred using their generated molecules' atom types and inter-atom distances, in contrast to explicit bond prediction approaches such as those of [34] and [35]. For each of these baseline methods, we report results as curated by Wu et al. [32] and Xu et al. [33]. We further include two GCDM ablation models to more closely analyze the impact of certain key model components within GCDM: GCDM without chiral and geometry-complete local frames F_ij (i.e., GCDM w/o Frames) and GCDM without scalar message attention (SMA) applied to each edge message (i.e., GCDM w/o SMA). In Section 3 as well as Appendices B.1 and C, we further discuss GCDM's design, hyperparameters, and optimization with these model configurations.
Results. In the top half of Table 1, GCDM performs competitively with the strongest baselines across the molecule quality metrics, with lower standard deviations. In the bottom half of Table 1, where we reevaluate GCDM and GeoLDM using 5 sampling runs and report 95% confidence intervals for each metric, GCDM generates 1.6% more RDKit-valid and unique molecules and 5.2% more novel molecules compared to GeoLDM, all while offering the best reported negative log-likelihood (NLL) for the QM9 test dataset. This result indicates that although GeoLDM offers novelty rates close to parity (i.e., 50%), GCDM nearly matches the stability and PB-validity rates of GeoLDM while yielding novel molecules nearly 60% of the time on average, suggesting that GCDM may prove more useful for accurately exploring the space of novel yet valid small molecules. Our ablation of SMA within GCDM demonstrates that, to generate stable 3D molecules, GCDM heavily relies on performing a lightweight version of fully-connected graph self-attention [20], which suggests avenues of future research that will be required to scale up such generative models to large biomolecules such as proteins. Additionally, removing geometric local frame embeddings from GCDM reveals that the inductive biases of molecular chirality and geometry-completeness are important contributing factors in GCDM achieving these SOTA results.

Property-Conditional 3D Molecule Generation -QM9
Baselines. Toward the practical use case of conditional generation of 3D molecules, we compare GCDM to existing E(3)-equivariant models, EDM [25] and GeoLDM [33], as well as to two naive baselines: "Naive (Upper-bound)", where a molecular property classifier ϕ_c predicts molecular properties given a method's generated 3D molecules and shuffled (i.e., random) property labels; and "# Atoms", where one uses the numbers of atoms in a method's generated 3D molecules to predict their molecular properties. For each baseline method, we report its mean absolute error (MAE) in terms of molecular property prediction by an ensemble of three EGNN classifiers ϕ_c [36], as reported in Hoogeboom et al. [25]. For GCDM, we train each conditional model by conditioning it on one of six distinct molecular property feature inputs (α, gap, homo, lumo, µ, and C_v) for approximately 1,500 epochs, using the QM9 validation split of Hoogeboom et al. [25] as the model's training dataset and the QM9 training split of Hoogeboom et al. [25] as the corresponding EGNN classifier ensemble's training dataset. Consequently, one can expect the gap between a method's performance and that of "QM9 (Lower-bound)" to decrease as the method more accurately generates property-specific molecules.
Fig. 3: PB-valid 3D molecules generated by GCDM using increasing values of α.
The results in the bottom half of the corresponding table (where GeoLDM is retrained using its official code repository due to the unavailability of its conditional model checkpoints) are likewise listed for selected methods, yet instead report (across an ensemble of three separately-trained EGNN property classifier models, each with a distinct random seed) Student's t-distribution 95% confidence error intervals for each property metric as well as the percentage of PoseBusters-validated (PB-Valid) de novo generated molecules. The top-1 (best) conditioning results for this task are in bold, and the second-best results are underlined.

Unconditional 3D Molecule Generation -GEOM-Drugs
The second dataset used in our experiments, the GEOM-Drugs dataset, is a well-known source of large, 3D molecular conformers for downstream machine learning tasks. It contains 430k molecules, each with 44 atoms on average and up to as many as 181 atoms after hydrogen atoms are imputed following dataset postprocessing as in Hoogeboom et al. [25]. For this experiment, we collect the 30 lowest-energy conformers corresponding to each molecule and task each baseline method with generating new molecules with 3D positions and types for each constituent atom.
Here, we also adopt the negative log-likelihood, atom stability, and molecule stability metrics as defined in Section 2.1 and train GCDM using the same hyperparameters as listed in Appendix C.2, with the exception of training for approximately 75 epochs on GEOM-Drugs.
Baselines. In this experiment, we compare GCDM to several state-of-the-art baseline methods for 3D molecule generation on GEOM-Drugs. Similar to our experiments on QM9, in addition to including a reference point for molecule quality metrics using GEOM-Drugs itself (i.e., Data), here we also compare against E-NF, GDM, GDM-aug, EDM, Bridge along with its variant Bridge + Force, as well as GraphLDM, GraphLDM-aug, and GeoLDM. As in Section 2.1, each method's results in the top half (bottom half) of the table are reported as the mean and standard deviation (mean and Student's t-distribution 95% confidence interval) (±) of each metric across three (five) test runs on GEOM-Drugs.
Results. To start, Table 3 displays an interesting phenomenon that is important to note: due to the size and atomic complexity of GEOM-Drugs' molecules and the subsequent errors accumulated when estimating bond types based on inter-atom distances, the baseline results for the molecule stability metrics measured here (i.e., Data) are much lower than those collected for the QM9 dataset. Thus, reporting additional chemical and structural validity metrics (e.g., PB-Valid) for comparison is crucial to accurately assess a method's performance in this context, which we do in the bottom half of Table 3. Nonetheless, for GEOM-Drugs, GCDM supersedes EDM's SOTA negative log-likelihood results by 57% and advances GeoLDM's SOTA atom and molecule stability results by 4% and more than sixfold, respectively. More importantly, however, GCDM can generate a significant proportion of PB-valid large molecules, surpassing even the reference molecule stability rate of the GEOM-Drugs dataset (i.e., 2.8%) by 54%, demonstrating that geometric diffusion models such as GCDM can not only effectively generate valid large molecules but can also generalize beyond the native distribution of stable molecules within GEOM-Drugs.
Figure 4 illustrates PoseBusters-valid examples of large molecules generated by GCDM at the scale of GEOM-Drugs, with corresponding SMILES strings shown from left to right. As an example of the notion that GCDM produces low-energy structures for a generated molecular graph, the free energies for Figures 4 (a) and (f) were computed to be -3 kcal/mol and -2 kcal/mol, respectively, using CREST [39] at the GFN2-xTB level of theory (which matches the corresponding free energy distribution mean for the GEOM-Drugs dataset (-2.5 kcal/mol) as illustrated in Figure 2 of [40]). Lastly, to detect whether a method, in aggregate, generates molecules with unlikely 3D conformations, a generated molecule's energy ratio is defined as in Buttenschoen et al. [23] to be the ratio of the molecule's UFF-computed energy [41] to the mean energy of 50 RDKit ETKDGv3-generated conformers [42] of the same molecular graph. Note that, as discussed by Wills et al. [43], generated molecules with an energy ratio greater than 7 are considered to have highly unlikely 3D conformations. Subsequently, Figure 5 reveals that the average energy ratio of GCDM's large 3D molecules is notably lower and more tightly bounded compared to GeoLDM, the baseline SOTA method for this task, indicating that GCDM also generates more energetically-stable 3D molecule conformations compared to prior methods.
Fig. 5: A comparison of the energy ratios [23] of 10,000 large 3D molecules generated by GCDM and GeoLDM, a baseline state-of-the-art method. Employing Student's t-distribution 95% confidence intervals, GCDM achieves a mean energy ratio of 2.98 ± 0.13, whereas GeoLDM yields a mean energy ratio of 4.19 ± 0.09.
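The energy-ratio check described above can be sketched as follows; the placeholder energies stand in for real UFF force-field outputs, and the function names are ours, not PoseBusters':

```python
import statistics

def energy_ratio(generated_conf_energy, rdkit_conf_energies):
    """Energy ratio as defined by Buttenschoen et al. [23]: the UFF energy of
    the generated conformation divided by the mean UFF energy over (in the
    paper, 50) RDKit ETKDGv3-generated conformers of the same molecular graph.
    Inputs here are placeholder numbers, not real force-field evaluations."""
    return generated_conf_energy / statistics.mean(rdkit_conf_energies)

def is_highly_unlikely(ratio, threshold=7.0):
    """Wills et al. [43] treat conformations with an energy ratio > 7 as
    highly unlikely."""
    return ratio > threshold
```

In practice, the per-conformer energies would come from RDKit's UFF force field after embedding conformers with ETKDGv3; only the final ratio and threshold logic are shown here.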

Property-Guided 3D Molecule Optimization -QM9
To evaluate whether molecular diffusion models can not only generate new 3D molecules but can also optimize existing small molecules using molecular property guidance, we adopt the QM9 dataset for the following experiment. First, we use an unconditional GCDM model to generate 1,000 3D molecules using 10 time steps of time-scaled reverse diffusion (to leave such molecules in an unoptimized state), and then we provide these molecules to a separate property-conditional diffusion model for optimization of the molecules towards the conditional model's respective property. This conditional model accepts these 3D molecules as intermediate states for 100 and 250 time steps of property-guided optimization of the molecules' atom types and 3D coordinates. Lastly, we repurpose our experimental setup from Section 2.2 to score these optimized molecules using an ensemble of external property classifier models to evaluate (1) how much the optimized molecules' predicted property values have been improved for the respective property (first metric) and (2) whether and how much the optimized molecules' stability (as defined in Section 2.1) has been changed during optimization (second metric).
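The optimization procedure above, treating existing molecules as intermediate diffusion states and refining them with a property-conditional reverse process, can be sketched abstractly as follows. The denoiser here is a toy stand-in for one GCDM reverse-diffusion update; the real model jointly updates atom types and 3D coordinates:

```python
import numpy as np

def optimize(z, denoise_step, n_steps=100):
    """Treat an existing molecule state z as an intermediate diffusion state
    and refine it with n_steps of a (property-conditional) reverse process.
    `denoise_step` is a hypothetical stand-in for one reverse-diffusion update."""
    for t in range(n_steps, 0, -1):
        z = denoise_step(z, t)
    return z

# Toy stand-in denoiser that contracts states toward a "clean" target state.
target = np.array([1.0, -2.0, 0.5])
step = lambda z, t: z + 0.1 * (target - z)
z_opt = optimize(np.zeros(3), step, n_steps=100)
```

The experiment's two settings correspond to calling `optimize` with `n_steps=100` or `n_steps=250` before re-scoring the resulting molecules with the external property classifiers.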
Baselines. Baseline methods for this experiment include EDM [25] and GCDM, where both methods use similar experimental setups for evaluation. Our baseline methods also include property-specificity and molecule stability measures of the initial (unconditional) 3D molecules to demonstrate how much molecular diffusion models can modify or improve these existing 3D molecules in terms of how property-specific and stable they are. As in Section 2.2, property specificity is measured in terms of the corresponding property classifier's MAE for a given molecule with a targeted property value, reporting the mean and Student's t-distribution 95% confidence interval for each property MAE across an ensemble of three corresponding classifiers. Molecular stability (i.e., Mol Stable (%)), here abbreviated as MS, is defined as in Section 2.1.
Results. Figure 6 showcases a practical finding: geometric diffusion models such as GCDM can effectively be repurposed as 3D molecule optimization methods with minimal modifications, improving both a molecule's stability and property specificity. This finding empirically supports the idea that molecular denoising diffusion models approximate the Boltzmann distribution with the score function they learn [44] and therefore may be applied in the optimization stage of the typical drug discovery pipeline [45] to experiment with a wider range of potential drug candidates (post-optimization) more quickly than previously possible. Simultaneously, the baseline EDM method fails to consistently optimize the stability and property specificity of existing 3D molecules, which suggests that geometric methods such as GCDM are theoretically and empirically better suited for such tasks. Notably, on average, with 100 time steps GCDM improves the stability of the initial molecules by over 25% and their specificity for each molecular property by over 27%, whereas for the properties it can optimize with 100 time steps, EDM improves the stability of the molecules by 13% and their property specificity by 15%. Lastly, it is worth noting that increasing the number of optimization time steps from 100 to 250 inconsistently leads to further improvements to molecules' stability and property specificity, indicating that the optimization trajectory likely reaches a local minimum around 100 time steps and hence rationalizes reducing the required compute time for optimizing 1,000 molecules, e.g., from 15 minutes (for 250 steps) to 5 minutes (for 100 steps).

Protein-Conditional 3D Molecule Generation
To investigate whether geometry-complete methods can enhance the ability of molecular diffusion models to generate 3D molecules within a given protein pocket (i.e., to perform structure-based drug design (SBDD)), in this experiment, we adopt the standard Binding MOAD (BM) [46] and CrossDocked (CD) [47] datasets for training and evaluation of GCDM-SBDD, our geometry-complete diffusion generative model based on GCPNet++ that extends the diffusion framework of Schneuing et al. [48] for protein pocket-aware molecule generation. The Binding MOAD dataset consists of 100,000 high-quality protein-ligand complexes for training and 130 proteins for testing, with a 30% sequence identity threshold being used to define this cross-validation split. Similarly, the CrossDocked dataset contains 40,484 high-quality protein-ligand complexes split between training (40,354) and test (100) partitions using proteins' enzyme commission numbers as described by Schneuing et al. [48].
Baselines. Baseline methods for this experiment include DiffSBDD-cond [48] and DiffSBDD-joint [48]. We compare these methods to our proposed geometry-complete protein-aware diffusion model, GCDM-SBDD, using metrics that assess the properties, and thereby the quality, of each method's generated molecules.
Table 4: Evaluation of generated molecules for target protein pockets from the Binding MOAD (BM) and CrossDocked (CD) test datasets. Our proposed method, GCDM-SBDD, achieves the best results for the metrics listed in bold and the second-best results for the metrics underlined. For each metric, a method's mean and Student's t-distribution 95% confidence error interval (±) is reported over 100 generated molecules for each test pocket. Additionally, the PoseBusters validity (PB-Valid) metric is defined as the percentage of generated molecules that pass all docking-relevant structural and chemical sanity checks proposed by [23], with the validity ratio to the left (right) of each / denoting the percentage of valid molecules without (with) consideration of protein-ligand steric clashes.
These molecule-averaged metrics include a method's average Vina score (computed using QuickVina 2.1) [49] as a physics-based estimate of a ligand's binding affinity with a target protein, measured in units of kcal/mol (lower is better); average drug-likeness QED [50] (computed using RDKit 2022.03.2); average synthesizability [51] (computed using the procedure introduced by [52]) as an increasing measure of the ease of synthesizing a given molecule (higher is better); on average how many rules of Lipinski's rule of five are satisfied by a ligand [53] (computed compositionally using RDKit 2022.03.2); and average diversity in mean pairwise Tanimoto distances [54, 55] (derived manually using fingerprints and Tanimoto similarities computed by RDKit 2022.03.2). Following established conventions for 3D molecule generation [25], the size of each ligand to generate was determined using the ligand size distribution of the respective training dataset. Note that, in this context, the "joint" and "cond" configurations represent generating a molecule for a protein target, respectively, with and without also modifying the coordinates of the binding pocket within the protein target. Also note that, similar to our experiments in Sections 2.1-2.4, the GCDM-SBDD model uses 9 GCP message-passing layers along with 256 (64) and 32 (16) invariant (equivariant) node and edge features, respectively.
Results. Table 4 shows that, across both of the standard SBDD datasets (i.e., Binding MOAD and CrossDocked), GCDM-SBDD generates more clash-free (PB-Valid) and lower-energy (Vina) molecules compared to prior methods. Moreover, GCDM-SBDD achieves comparable or better results in terms of drug-likeness measures (e.g., QED) and comparable results for all other molecule metrics, without performing any hyperparameter tuning due to compute constraints. These results suggest that GCDM, with GCPNet++ as its denoising neural network, not only works well for de novo 3D molecule
generation but also protein target-specific 3D molecule generation, notably expanding the number of real-world application areas of GCDM. Concretely, GCDM-SBDD improves upon DiffSBDD's average Vina energy scores by 8% on average across both datasets while generating more than twice as many PB-valid "candidate" molecules for the more challenging Binding MOAD dataset.
As suggested by [23], the gap between the PB-Valid ratios in Table 4 without and with protein-ligand steric clashes considered, for both GCDM-SBDD and DiffSBDD, indicates that deep learning-based drug design methods for targeted protein pockets can likely benefit significantly from interaction-aware molecular dynamics relaxation following protein-conditional molecule generation, which may allow many generated "candidate" molecules to have their PB validity "recovered" by such relaxation. Nonetheless, Figure 7 demonstrates that GCDM can consistently generate clash-free, realistic, and diverse 3D molecules with low Vina energies for unseen protein targets.
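As one concrete illustration, the diversity metric reported for this experiment, the mean pairwise Tanimoto distance over generated molecules, can be computed from fingerprints as follows. Here fingerprints are represented as sets of on-bit indices for a self-contained sketch; in practice they would be RDKit bit vectors:

```python
from itertools import combinations

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints represented as sets of
    on-bit indices: |intersection| / |union|."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

def diversity(fingerprints):
    """Mean pairwise Tanimoto *distance* (1 - similarity) over all pairs of
    generated molecules; higher means a more diverse sample set."""
    pairs = list(combinations(fingerprints, 2))
    return sum(1.0 - tanimoto(a, b) for a, b in pairs) / len(pairs)

# Three toy fingerprints: two similar molecules and one unrelated one.
fps = [{1, 2, 3}, {1, 2, 4}, {5, 6}]
```

The same formula applies unchanged when the sets are replaced by Morgan fingerprints computed with RDKit, as in the experiments above.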

Problem Setting
In this work, our goal is to generate new 3D molecules either unconditionally or conditioned on user-specified properties. We represent a molecular point cloud (i.e., a 3D molecule) as a fully-connected 3D graph G = (V, E), with V and E representing the graph's sets of nodes and edges, respectively, and N = |V| and E = |E| representing the numbers of nodes and edges in the graph, accordingly. In addition, X = (x_1, x_2, ..., x_N) ∈ R^(N×3) represents the respective Cartesian coordinates for each node (i.e., atom). Each node in G is described by scalar features H ∈ R^(N×h) and m vector-valued features χ ∈ R^(N×(m×3)). Likewise, each edge in G is described by scalar features E ∈ R^(E×e) and x vector-valued features ξ ∈ R^(E×(x×3)). Then, let M = [X, H] represent the molecules (i.e., atom coordinates and atom types) our method is tasked with generating, where [·, ·] denotes the concatenation of two variables. Importantly, the input features H and E are invariant to 3D roto-translations, whereas the input vector features X, χ, and ξ are equivariant to 3D roto-translations. Lastly, we design the denoising neural network Φ to be equivariant to 3D roto-translations (i.e., SE(3)-equivariant) by defining it such that its internal operations and outputs match corresponding 3D roto-translations acting upon its inputs.
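A minimal sketch of this graph representation; the concrete feature sizes below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fully_connected_edges(num_nodes):
    """All ordered pairs (i, j) with i != j, i.e., the E = N * (N - 1)
    directed edges of the fully-connected 3D graph G."""
    return [(i, j) for i in range(num_nodes) for j in range(num_nodes) if i != j]

# Illustrative sizes: N atoms, h scalar features, m vector features per node.
N, h, m = 5, 8, 2
X = np.zeros((N, 3))        # Cartesian coordinates (equivariant)
H = np.zeros((N, h))        # scalar node features (invariant)
chi = np.zeros((N, m, 3))   # vector-valued node features (equivariant)
edges = fully_connected_edges(N)
```

Edge features E and ξ would be shaped analogously over the `len(edges)` directed edges.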

Overview of GCDM
We will now introduce GCDM, a new Geometry-Complete SE(3)-Equivariant Diffusion Model. GCDM defines a joint noising process on equivariant atom coordinates x and invariant atom types h to produce a noisy representation z = [z^(x), z^(h)], and it then learns a generative denoising process using the newly-proposed GCPNet++ model (see Section A.2 of the appendix), which contains two distinct feature channels for scalar and vector features, respectively, and supports geometry-complete and chirality-aware message-passing [56].
As an extension of the DDPM framework [57] outlined in Appendix B.1, GCDM is designed to generate molecules in 3D while maintaining SE(3) equivariance, in contrast to previous methods that generate molecules solely in 1D [58], 2D [59], or 3D modalities without considering chirality [9, 25]. GCDM generates molecules by directly placing atoms in continuous 3D space and assigning them discrete types, which is accomplished by modeling the forward and reverse diffusion processes, respectively:

q(z_{1:T} | z_0) = ∏_{t=1}^{T} q(z_t | z_{t-1})  and  p_Φ(z_{0:T}) = p(z_T) ∏_{t=1}^{T} p_Φ(z_{t-1} | z_t).

Overall, these processes describe a latent variable model p_Φ(z_0) = ∫ p_Φ(z_{0:T}) dz_{1:T} given a sequence of latent variables z_0, z_1, ..., z_T matching the dimensionality of the data M ∼ p(z_0). As illustrated in Figure 1, the forward process (directed from right to left) iteratively adds noise to an input, and the learned reverse process (directed from left to right) iteratively denoises a noisy input to generate new examples from the original data distribution. We will now proceed to formulate GCDM's joint diffusion process and its remaining practical details.

Joint Molecular Diffusion
Recall that our model's molecular graph inputs, G, associate with each node a 3D position x_i ∈ R^3 and a feature vector h_i ∈ R^h. By adding random noise to these model inputs at each time step t via a fixed Markov-chain variance schedule σ_1², σ_2², ..., σ_T², we can define a joint molecular diffusion process for equivariant atom coordinates x and invariant atom types h as the product of two distributions [25]:

q(z_t | x, h) = N_x(z_t^(x) | α_t x, σ_t² I) · N_h(z_t^(h) | α_t h, σ_t² I),

where N_xh serves as concise notation to denote this product of two normal distributions; the first distribution, N_x, represents the noised node coordinates; the second distribution, N_h, represents the noised node features; and α_t² = 1 − σ_t², following the variance-preserving process of Ho et al. [57]. With α_{t|s} = α_t/α_s and σ_{t|s}² = σ_t² − α_{t|s}² σ_s² for any t > s, we can directly obtain the noisy data distribution q(z_t | z_0) at any time step t:

q(z_t | z_0) = N_xh(z_t | α_t z_0, σ_t² I).

Bayes' theorem then tells us that if we define μ_{t→s}(z_t, z_0) and σ_{t→s} as

μ_{t→s}(z_t, z_0) = (α_{t|s} σ_s² / σ_t²) z_t + (α_s σ_{t|s}² / σ_t²) z_0  and  σ_{t→s} = σ_{t|s} σ_s / σ_t,

then the inverse of the noising process, the true denoising process, is given by the posterior of the transitions conditioned on M ∼ z_0, which is also Gaussian [25]:

q(z_s | z_t, z_0) = N(z_s | μ_{t→s}(z_t, z_0), σ_{t→s}² I).
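A minimal NumPy sketch of sampling z_t ∼ q(z_t | x, h). The cosine-shaped variance-preserving schedule below is an assumption for illustration (GCDM uses the schedule of Hoogeboom et al. [25]), and the coordinate noise is projected onto the zero center-of-gravity subspace, as GCDM does for translation invariance:

```python
import numpy as np

rng = np.random.default_rng(0)

def vp_schedule(t, T=1000):
    """Toy variance-preserving schedule with alpha_t^2 + sigma_t^2 = 1;
    an illustrative assumption, not the paper's exact schedule."""
    alpha = np.cos(0.5 * np.pi * t / T)
    sigma = np.sin(0.5 * np.pi * t / T)
    return alpha, sigma

def zero_com(x):
    """Project coordinates (or coordinate noise) onto the subspace with
    zero center of gravity."""
    return x - x.mean(axis=0, keepdims=True)

def noise_molecule(x, h, t, T=1000):
    """Sample z_t = [z_t^(x), z_t^(h)] ~ q(z_t | x, h): coordinates receive
    zero-CoM Gaussian noise (N_x), while invariant atom features receive
    ordinary Gaussian noise (N_h)."""
    alpha, sigma = vp_schedule(t, T)
    eps_x = zero_com(rng.standard_normal(x.shape))
    eps_h = rng.standard_normal(h.shape)
    z_x = alpha * zero_com(x) + sigma * eps_x
    z_h = alpha * h + sigma * eps_h
    return z_x, z_h
```

Because both the centered coordinates and the projected noise have zero mean, every noisy state z_t^(x) remains on the zero center-of-gravity subspace.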

Parametrization of the Reverse Process
Noise parametrization. We now need to define the learned generative reverse process that denoises pure noise into realistic examples from the original data distribution. Toward this end, we can directly use the noise posteriors q(z_s | z_t, z_0) of Eq. B12 in the appendix after sampling z_0 ∼ q(z_0) (i.e., M = [x, h]). However, to do so, we must replace the input variables x and h with the approximations x̂ and ĥ predicted by the denoising neural network Φ, where the values for ẑ_0 = [x̂, ĥ] depend on z_t, t, and Φ. GCDM then parametrizes μ_{Φ,t→s}(z_t, ẑ_0) by predicting the noise ε̂_t = [ε̂_t^(x), ε̂_t^(h)], which represents the noise individually added to x and h. We can then use the predicted ε̂_t to derive:

ẑ_0 = [x̂, ĥ] = z_t/α_t − ε̂_t · σ_t/α_t.    (5)

Invariant likelihood. Ideally, we desire for a 3D molecular diffusion model to assign the same likelihood to a generated molecule even after arbitrarily rotating or translating it in 3D space. To ensure the model achieves this desirable property for p_Φ(z_0), we can leverage the insight that an invariant distribution composed with an equivariant transition function yields an invariant distribution [9, 25, 27]. Moreover, to address the translation-invariance issue raised by Satorras et al. [27] in the context of handling a distribution over 3D coordinates, we adopt the zero center of gravity trick proposed by Xu et al. [9] to define N_x as a normal distribution on the subspace defined by Σ_i x_i = 0. In contrast, to handle node features h_i that are invariant to roto-translations, we can instead use a conventional normal distribution N. As such, if we parametrize the transition function p_Φ using an SE(3)-equivariant neural network after applying the zero center of gravity trick of Xu et al. [9], the model achieves the desired likelihood invariance property.
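The clean-data estimate of Eq. (5) and the Gaussian posterior mean μ_{t→s} can be sketched as follows, using the standard variance-preserving identities; the toy check below uses a known noise sample in place of the model's actual prediction:

```python
import numpy as np

def predict_z0(z_t, eps_hat, alpha_t, sigma_t):
    """Eq. (5): recover the clean-data estimate z0_hat = [x_hat, h_hat]
    from the noise predicted by the denoising network."""
    return z_t / alpha_t - eps_hat * sigma_t / alpha_t

def posterior_mean(z_t, z0_hat, alpha_s, alpha_t, sigma_s, sigma_t):
    """Gaussian posterior mean mu_{t->s}(z_t, z0_hat), written with the
    alpha_{t|s} and sigma_{t|s} shorthand for times t > s."""
    alpha_ts = alpha_t / alpha_s
    sigma_ts_sq = sigma_t ** 2 - alpha_ts ** 2 * sigma_s ** 2
    return (alpha_ts * sigma_s ** 2 / sigma_t ** 2) * z_t \
        + (alpha_s * sigma_ts_sq / sigma_t ** 2) * z0_hat
```

With a perfect noise prediction, `predict_z0` recovers z_0 exactly, and at s = 0 (where α_s = 1 and σ_s = 0) the posterior mean collapses onto the clean-data estimate.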

Geometry-Complete Denoising Network
Crucially, to satisfy the desired likelihood invariance property described in Section 3.4 while optimizing for model expressivity and runtime, GCDM parametrizes the denoising neural network $\Phi$ using GCPNet++, an enhanced version of the SE(3)-equivariant GCPNet algorithm [56] that we propose in Section A.2 of the appendix. Notably, GCPNet++ learns both scalar (invariant) and vector (equivariant) node and edge features through a chirality-sensitive graph message-passing procedure, which enables GCDM to denoise its noisy molecular graph inputs using not only noisy scalar features but also noisy vector features derived directly from the noisy node coordinates $z^{(x)}$ (i.e., $\psi(z^{(x)})$). We empirically find that incorporating such noisy vectors considerably increases GCDM's representation capacity for 3D graph denoising.
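For intuition, the geometry-complete local frames that make such message passing chirality-sensitive (defined precisely in Section A.2 of the appendix) can be sketched as follows; `local_frames` is an illustrative helper, not the authors' implementation:

```python
import numpy as np

def local_frames(x_i, x_j):
    """Edge-local frame (a, b, c) for nodes i and j, following the a/b/c
    construction: a is the normalized displacement, b the normalized cross
    product of the endpoint positions, and c = a x b."""
    a = (x_i - x_j) / np.linalg.norm(x_i - x_j)
    b = np.cross(x_i, x_j)
    b = b / np.linalg.norm(b)
    c = np.cross(a, b)
    return a, b, c
```

The cross-product terms flip sign under reflection, which is what allows features projected onto these frames to distinguish a molecule from its mirror image.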

Optimization Objective
Following previous works on diffusion models [25, 32, 57], the noise parametrization chosen for GCDM yields the following model training objective:

$$\mathcal{L}_t = \mathbb{E}_{\epsilon_t \sim \mathcal{N}_{xh}(0, I)} \left[ \tfrac{1}{2} \, w(t) \, \| \epsilon_t - \hat{\epsilon}_t \|^2 \right],$$

where $\hat{\epsilon}_t$ is the denoising network's noise prediction for atom types and coordinates as described above and where we empirically choose to set $w(t) = 1$ for the best possible generation results. Additionally, GCDM permits a negative log-likelihood computation using the same optimization terms as Hoogeboom et al. [25], for which we refer interested readers to Appendices B.2, B.3, and B.4 of the appendix.
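With $w(t) = 1$, the training objective reduces to a simple mean squared error between the sampled and predicted noise; a minimal sketch:

```python
import numpy as np

def diffusion_loss(eps, eps_hat, w_t=1.0):
    """Simplified denoising objective with w(t) = 1 by default:
    L_t = 0.5 * w(t) * ||eps - eps_hat||^2, averaged over the batch."""
    return 0.5 * w_t * np.mean(np.sum((eps - eps_hat) ** 2, axis=-1))
```

In practice this loss is evaluated on the concatenated coordinate and feature noise, with a time step $t$ sampled uniformly per training example.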

Discussion & Conclusions
While previous methods for 3D molecule generation have possessed insufficient geometric and molecular priors for scaling well to a variety of molecular datasets, in this work we introduced the Geometry-Complete Diffusion Model (GCDM), which establishes a clear performance advantage over previous methods, generating more realistic, stable, valid, unique, and property-specific 3D molecules while enabling the generation of many large 3D molecules that are energetically stable as well as chemically and structurally valid. Moreover, GCDM does so without complex modeling techniques such as latent diffusion, which suggests that GCDM's results could likely be improved further by incorporating such techniques [33]. Although GCDM's results here are promising, it (like previous methods) requires fully-connected graph attention as well as 1,000 time steps to generate a high-quality batch of 3D molecules, so using it to generate several thousand large molecules can take a notable amount of time (e.g., 15 minutes to generate 250 new large molecules). As such, future research with GCDM could involve adding time-efficient graph construction or sampling algorithms [60], or exploring the impact of higher-order (e.g., type-2 tensor) yet efficient geometric expressiveness [61] on 3D generative models, to accelerate sample generation and increase sample quality. Furthermore, integrating additional external tools for assessing the quality and rationality of generated molecules [62] is a promising direction for future work.
In this setting, with $(h_i \in H, \chi_i \in \chi, e_{ij} \in E, \xi_{ij} \in \xi)$, GCPNet++, our enhanced version of GCPNet, consists of a composition of Geometry-Complete Graph Convolution (GCPConv) layers, which are defined as:

$$n_i^{l+1} = \phi^l \Big( n_i^l, \, \mathcal{A}_{j \in \mathcal{N}(i)} \, \Omega_\omega \big( n_i^l, n_j^l, (e_{ij}, \xi_{ij}), \mathcal{F}_{ij}^t \big) \Big),$$

where $n_i^l = (h_i^l, \chi_i^l)$; $\phi^l$ is a trainable function; $l$ signifies the representation depth of the network; $\mathcal{A}$ is a permutation-invariant aggregation function; $\Omega_\omega$ represents a message-passing function corresponding to the $\omega$-th GCP message-passing layer [56]; and node $i$'s geometry-complete local frames are $\mathcal{F}_{ij}^t = (a_{ij}^t, b_{ij}^t, c_{ij}^t)$, with $a_{ij}^t = \frac{x_i^t - x_j^t}{\|x_i^t - x_j^t\|}$, $b_{ij}^t = \frac{x_i^t \times x_j^t}{\|x_i^t \times x_j^t\|}$, and $c_{ij}^t = a_{ij}^t \times b_{ij}^t$, respectively. Importantly, GCPNet++ restructures the network flow of GCPConv [56] for each iteration of node feature updates to simplify and enhance information flow.

For all $t > s$, with $\alpha_{t|s} = \alpha_t / \alpha_s$ and $\sigma_{t|s}^2 = \sigma_t^2 - \alpha_{t|s}^2 \sigma_s^2$, the transition distribution is $q(z_t \mid z_s) = \mathcal{N}(z_t \mid \alpha_{t|s} z_s, \sigma_{t|s}^2 I)$. In total, then, we can write the noising process as:

$$q(z_0, z_1, \ldots, z_T \mid x) = q(z_0 \mid x) \prod_{t=1}^{T} q(z_t \mid z_{t-1}). \tag{B11}$$

If we then define $\mu_{t \to s}(x, z_t)$ and $\sigma_{t \to s}$ as

$$\mu_{t \to s}(x, z_t) = \frac{\alpha_{t|s} \sigma_s^2}{\sigma_t^2} z_t + \frac{\alpha_s \sigma_{t|s}^2}{\sigma_t^2} x \quad \text{and} \quad \sigma_{t \to s} = \frac{\sigma_{t|s} \sigma_s}{\sigma_t},$$

we have that the inverse of the noising process, the true denoising process, is given by the posterior of the transitions conditioned on $x$, a process that is also Gaussian:

$$q(z_s \mid z_t, x) = \mathcal{N}(z_s \mid \mu_{t \to s}(x, z_t), \sigma_{t \to s}^2 I). \tag{B12}$$

The Generative Denoising Process. In diffusion models, we define the generative process according to the true denoising process. However, for such a denoising process, we do not know the value of $x$ a priori, so we typically approximate it as $\hat{x} = \phi(z_t, t)$ using a neural network $\phi$. Doing so then lets us express the generative transition distribution $p(z_s \mid z_t)$ as $q(z_s \mid \hat{x}(z_t, t), z_t)$. As a practical alternative to Eq.
B12, we can represent this expression using the approximation $\hat{x}$:

$$p(z_s \mid z_t) = \mathcal{N}(z_s \mid \mu_{t \to s}(\hat{x}, z_t), \sigma_{t \to s}^2 I).$$

If we choose to define $s = t - 1$, then we can derive the variational lower bound on the log-likelihood of $x$ given the generative model as:

$$\log p(x) \geq \mathcal{L}_0 + \mathcal{L}_{\text{base}} + \sum_{t=1}^{T} \mathcal{L}_t,$$

where $\mathcal{L}_0 = \log p(x \mid z_0)$ models the likelihood of the data given its noisy representation $z_0$, $\mathcal{L}_{\text{base}} = -\mathrm{KL}(q(z_T \mid x) \,\|\, p(z_T))$ models the difference between a standard normal distribution and the final latent variable $q(z_T \mid x)$, and $\mathcal{L}_t = -\mathrm{KL}(q(z_s \mid x, z_t) \,\|\, p(z_s \mid z_t))$ for $t = 1, \ldots, T$. Note that, in this formulation of diffusion models, the neural network $\phi$ directly predicts $x$. However, Ho et al. [57] and others have found optimization of $\phi$ to be made much easier when instead predicting the Gaussian noise added to $x$ to create $z_t$. An intuition for how this changes the neural network's learning dynamics is that, when predicting back the noise added to the model's input, the network is being trained to more directly differentiate which part of $z_t$ corresponds to the input's feature signal (i.e., the underlying data point $x$) and which part corresponds to added feature noise. In doing so, if we let $z_t = \alpha_t x + \sigma_t \epsilon$, the neural network can then predict $\hat{\epsilon} = \phi(z_t, t)$ such that:

$$\hat{x} = z_t / \alpha_t - \hat{\epsilon} \cdot \sigma_t / \alpha_t.$$

Kingma et al. [65] and others have since shown that, when parametrizing the denoising neural network in this way, the loss term $\mathcal{L}_t$ reduces to:

$$\mathcal{L}_t = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)} \left[ \tfrac{1}{2} \, w(t) \, \| \epsilon - \hat{\epsilon} \|^2 \right]$$

for a schedule-dependent weight $w(t)$. Note that, in practice, the loss term $\mathcal{L}_{\text{base}}$ should be close to zero when using a noising schedule defined such that $\alpha_T \approx 0$. Moreover, if and when $\alpha_0 \approx 1$ and $x$ is a discrete value, we will find $\mathcal{L}_0$ to be close to zero as well.
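Putting this derivation together, a single ancestral denoising step $z_t \to z_s$ (substituting the model's estimate of the clean data into the Gaussian posterior) might be sketched as follows; this is an illustrative implementation under the assumption that the schedule arrays `alpha` and `sigma` are precomputed:

```python
import numpy as np

def denoise_step(z_t, eps_hat, s, t, alpha, sigma, rng):
    """One ancestral sampling step z_t -> z_s (s < t) using the Gaussian
    posterior q(z_s | z_t, z_0), with z_0 replaced by the model's estimate
    recovered from the predicted noise eps_hat."""
    alpha_ts = alpha[t] / alpha[s]
    sigma2_ts = sigma[t] ** 2 - alpha_ts ** 2 * sigma[s] ** 2
    # Recover the clean-data estimate from the predicted noise.
    z0_hat = z_t / alpha[t] - eps_hat * sigma[t] / alpha[t]
    # Posterior mean and standard deviation of q(z_s | z_t, z0_hat).
    mu = (alpha_ts * sigma[s] ** 2 / sigma[t] ** 2) * z_t \
        + (alpha[s] * sigma2_ts / sigma[t] ** 2) * z0_hat
    std = np.sqrt(sigma2_ts) * sigma[s] / sigma[t]
    return mu + std * rng.standard_normal(z_t.shape)
```

Iterating this step from $t = T$ down to $t = 1$, starting from pure Gaussian noise, yields a sample from the learned data distribution.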

B.2 Zeroth Likelihood Terms for GCDM Optimization Objective
For the zeroth likelihood terms corresponding to each type of input feature, we directly adopt the respective terms previously derived by Hoogeboom et al. [25]. Doing so enables a negative log-likelihood calculation for GCDM's predictions. In particular, for integer node features, we adopt the zeroth likelihood term:

$$p(h \mid z_0) = \int_{h - \frac{1}{2}}^{h + \frac{1}{2}} \mathcal{N}\big(u \mid z_0^{(h)}, \sigma_0\big) \, du, \tag{B17}$$

where we use the CDF of a standard normal distribution, $\Phi$, to compute Eq. B17 as $\Phi\big((h + \tfrac{1}{2} - z_0^{(h)})/\sigma_0\big) - \Phi\big((h - \tfrac{1}{2} - z_0^{(h)})/\sigma_0\big) \approx 1$ for reasonable noise parameters $\alpha_0$ and $\sigma_0$ [25]. For categorical node features, we instead use the analogous zeroth likelihood term of Hoogeboom et al. [25] defined over one-hot encodings.

Sampling from both QM9 models and GEOM-Drugs models can likely be accelerated using techniques such as DDIM sampling [60]. However, we have not officially validated the quality of generated molecules using such sampling techniques, so we caution users to be aware of the potential risk of degraded molecule sample quality when using such sampling algorithms.
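A minimal sketch of the integer-feature zeroth likelihood term, computed via the standard normal CDF as described above (helper names hypothetical):

```python
import math

def std_normal_cdf(x):
    """CDF of a standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def integer_log_likelihood(h, z0_h, sigma_0):
    """log p(h | z_0) for an integer-valued node feature, obtained by
    integrating a Gaussian centered at z0_h over the unit interval around h."""
    p = std_normal_cdf((h + 0.5 - z0_h) / sigma_0) \
        - std_normal_cdf((h - 0.5 - z0_h) / sigma_0)
    return math.log(max(p, 1e-30))  # clamp to avoid log(0)
```

For small `sigma_0` and an accurate prediction `z0_h`, essentially all of the Gaussian mass falls inside the unit interval around `h`, so the probability is close to 1 and the log-likelihood term close to 0, matching the observation above.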

C.4 Reproducibility
On GitHub, we provide all source code, data, and instructions required to train new GCDM models or reproduce our results for each of the four protein-independent molecule generation tasks we study in this work. The source code, data, and instructions for our protein-conditional molecule generation experiments are also available on GitHub. Our source code uses PyTorch [72] and PyTorch Lightning [71] to facilitate model training; PyTorch Geometric [73] to support sparse tensor operations on geometric graphs; and Hydra [74] to enable reproducible hyperparameter and experiment management.

C.5.1 Property-Guided 3D Molecule Optimization -QM9
In Table C1, for completeness, we list the numeric molecule optimization results comprising Figure 6 in Section 2.4.

Fig. 6 :
Fig. 6: Comparison of GCDM with baseline methods for property-guided 3D molecule optimization. The results are reported in terms of molecular stability (MS) and the MAE for molecular property prediction by an ensemble of three EGNN classifiers $\phi_c$ (each trained on the same QM9 subset using a distinct random seed), yielding corresponding Student's t-distribution 95% confidence intervals, with results listed for EDM- and GCDM-optimized samples as well as the molecule generation baseline ("Initial Samples"). Note that an "x" denotes a missing bar representing outlier property MAEs greater than 50. Alternatively, tabular results are given in Table C1 of the appendix.

Table 1 :
Comparison of GCDM with baseline methods for 3D molecule generation. The results in the top half of the table are reported in terms of the negative log-likelihood (NLL) $-\log p(x, h, N)$, atom stability, molecule stability, validity, and uniqueness of 10,000 samples drawn from each model, with standard deviations (±) for each model across three runs on QM9. The results in the bottom half of the table are for methods specifically evaluated across five runs on QM9 using Student's t-distribution 95% confidence intervals for per-metric errors, additionally with novelty (Novel) defined as the percentage of (valid and unique) generated molecule SMILES strings that were not found in the QM9 dataset and PoseBusters validity (PB-Valid) defined as the percentage of generated molecules that pass all relevant de novo structural and chemical sanity checks listed in Section 2.1. The top-1 (best) results for this task are in bold, and the second-best results are underlined, with "-" denoting a metric value that is not available.

In Table 1, we see that GCDM achieves the highest percentage of probable (NLL), valid, and unique molecules compared to all baseline methods, with AS and MS results only marginally lower than those of GeoLDM.

Table 2 :
Comparison of GCDM with baseline methods for property-conditional 3D molecule generation. The results in the top half of the table are reported in terms of the MAE for molecular property prediction by an EGNN classifier $\phi_c$ on a QM9 subset, with results listed for GCDM-generated samples as well as for four separate baseline methods.

Table 3 :
Comparison of GCDM with baseline methods for 3D molecule generation.
The results in the top half of the table are reported in terms of each method's negative log-likelihood, atom stability, and molecule stability, with standard deviations (±) across three runs on GEOM-Drugs, each drawing 10,000 samples from the model. The results in the bottom half of the table are for methods specifically evaluated across five runs on GEOM-Drugs using Student's t-distribution 95% confidence intervals for per-metric errors, additionally with validity and uniqueness (Val and Uniq), novelty (Novel), and PoseBusters validity (PB-Valid) defined as in Section 2.1. The top-1 (best) results for this task are in bold, and the second-best results are underlined.