Cartesian atomic cluster expansion for machine learning interatomic potentials

Machine learning interatomic potentials are revolutionizing large-scale, accurate atomistic modelling in materials science and chemistry. Many potentials use atomic cluster expansion or equivariant message passing frameworks. Such frameworks typically use spherical harmonics as angular basis functions, followed by Clebsch-Gordan contraction to maintain rotational symmetry, which may introduce redundancies in the representations and computational overhead. We propose an alternative: an atomic density expansion formulated directly in Cartesian coordinates. This approach provides a complete set of polynomially independent features of atomic environments while maintaining interaction body orders. Additionally, we integrate low-dimensional embeddings of chemical elements and inter-atomic message passing. The resulting potential, named Cartesian Atomic Cluster Expansion (CACE), exhibits good accuracy, stability, and generalizability. We validate its performance on diverse systems, including bulk water, small molecules, and 25-element high-entropy alloys.

Machine learning interatomic potentials (MLIPs) can learn from quantum-mechanical calculations and rapidly predict the energy and forces of atomic configurations, thus enabling more precise and comprehensive exploration of material and molecular properties at scale [1,2]. Crucially, while atomic Cartesian coordinates encode all the essential information of a structure, they cannot be used directly in learning tasks, because they do not respect symmetries such as translation, rotation, inversion, and atom permutation. Consequently, numerous methods have been developed to enforce these symmetries [2,3].
In particular, the atomic cluster expansion (ACE) [4] represents atomic environments through a body-order expansion, using a complete set of basis functions built from spherical harmonics and radial components. ACE can be viewed as a general framework encompassing many other representations of the atomic environment, such as Atom Centered Symmetry Functions (ACSF) [5], the Smooth Overlap of Atomic Positions (SOAP) descriptor [6], Moment Tensor Potentials (MTPs) [7], and the bispectrum [8].
Under the hood, in both ACE and E(3)-equivariant MPNNs, the key process is the following: equivariant features or messages of body order ν are encoded in spherical harmonics with degrees l. Invariant features are subsequently created by contracting the equivariant ones via the Clebsch-Gordan coefficients. These invariants are used to predict energies or other invariant physical quantities. The use of spherical harmonics is motivated by the fact that they form a natural basis for the irreducible representations of the SO(3) group and are thus useful in operations involving rotational symmetries. Since the original ACE work [4], various improvements have been made, e.g., efficient implementations of spherical harmonics evaluated in Cartesian coordinates [21-23], absorbing the Clebsch-Gordan matrix into the expansion coefficients during the evaluation stage [24], recursive Clebsch-Gordan coupling [18], and the use of Gaunt coefficients [25]. Moreover, the original formulation scales steeply as the number of elements increases, and tensor-reduced representations have been developed to combat this [26]. As the degree l and the body order ν increase, many invariant features become linearly or polynomially dependent, and schemes to eliminate these have been developed [27-29].
On the other hand, MTP [7] can be considered as ACE formulated in Cartesian space. MTP uses so-called moment tensors, and then contracts them to form rotationally invariant basis functions [7]. The MTP contraction yields a spanning set, but contains many linearly dependent basis functions [7,27]. In a similar vein, proper orthogonal descriptors were introduced as compact orthogonal basis functions formulated in Cartesian space [30].
Here, we propose an alternative method that performs the expansion of the atomic density, as well as the contraction yielding polynomially independent invariant features, directly in Cartesian coordinates, circumventing spherical harmonics. The Cartesian atomic cluster expansion (CACE) has the same mathematical foundation as the ACE framework, so it inherits the desirable properties of ACE: body ordering and completeness in the description of atomic environments [27]. The CACE invariant features are low-dimensional and can be computed independently and efficiently. By further incorporating a low-dimensional embedding for chemical elements, optimized radial channel coupling, and message passing between neighboring atoms, we develop the CACE potential. We benchmark CACE on diverse systems, with an emphasis on stability and extrapolation.

A. The CACE architecture
Atomic graph As illustrated in Fig. 1, each atomic structure is represented as a graph, with atoms as nodes, and directed edges that point from sender atoms to receiver atoms within a cutoff radius r_cut. The state of an atom i is denoted by the tuple σ_i = (r_i, θ_i, h_i): r_i is its Cartesian position, θ_i encodes its chemical element α, and h_i is its learnable features. Later we will use a superscript, e.g. σ_i^(t), to denote the atomic state in message passing layer t, but for now we omit the layer indices for simplicity.
For both the sender and the receiver node of an edge, the element α_i is first mapped to a one-hot vector of length equal to the total number of elements, N_element; the one-hot vector is then multiplied by a learnable weight matrix of size N_embedding × N_element, yielding a learnable embedding θ_i of length N_embedding for each atom i. Typically, N_embedding is a small number between 1 and 4. This continuous element embedding eliminates the unfavorable scaling of using discrete chemical element types when representing atomic environments, and also allows alchemical learning.
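The element embedding step can be sketched as follows; the element list and the weight values are illustrative placeholders, not the trained ones. For a one-hot input, the matrix multiplication simply selects the column of the weight matrix corresponding to the element.

```python
import random

def one_hot(alpha, elements):
    """One-hot vector over the known element types."""
    v = [0.0] * len(elements)
    v[elements.index(alpha)] = 1.0
    return v

def embed(alpha, elements, W):
    """Map an element symbol to an embedding of length N_embedding.
    W has shape N_embedding x N_element."""
    x = one_hot(alpha, elements)
    return [sum(W[m][e] * x[e] for e in range(len(elements)))
            for m in range(len(W))]

elements = ["H", "O"]          # hypothetical two-element system
random.seed(0)
N_embedding = 3                # small, between 1 and 4 as in the text
W = [[random.uniform(-1, 1) for _ in elements] for _ in range(N_embedding)]

theta_H = embed("H", elements, W)
theta_O = embed("O", elements, W)
```

Because the input is one-hot, `theta_H` is exactly the "H" column of `W`; training the weights lets chemically similar elements end up with nearby embeddings.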
Edge basis The edge basis, shown in Fig. 1b, describes the spatial arrangement of an atom j around the atom i (⃗r_ji ≡ r_j − r_i, r_ji = |⃗r_ji| and r̂_ji = ⃗r_ji/r_ji), and is formed by the product of a radial basis R, an angular basis L, and an edge-type basis T:

χ_cnl(σ_i, σ_j) = R_n,cl(r_ji) T_c(θ_i, θ_j) L_l(r̂_ji). (1)

Eqn. (1) is general and fundamental for many MLIPs, and a systematic discussion of the design choices for each term can be found in Ref. [31].
In CACE, the type of edge T depends on the states of the two nodes it connects. In the initial layer, the edge states are determined only by the chemical elements of the two atoms i and j. The edge is encoded using the flattened tensor product of the two node embedding vectors, T = θ_i ⊗ θ_j, resulting in an edge feature of length c = N_embedding². In a later message passing layer t, the edge features can also depend on the node hidden features h_i. This edge encoding formed by the tensor product can be interpreted as an attention mechanism [32] between the two atoms, with the embedding vectors of the sender and the receiver nodes acting as the query and the key.
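A minimal sketch of the flattened tensor-product encoding; the embedding vectors here are placeholders:

```python
def edge_type(theta_i, theta_j):
    """Flattened tensor product theta_i ⊗ theta_j: one scalar per
    (receiver-channel, sender-channel) pair, length N_embedding**2."""
    return [ti * tj for ti in theta_i for tj in theta_j]

# two toy embedding vectors with N_embedding = 2
T = edge_type([1.0, 2.0], [3.0, 4.0])
# T has length N_embedding**2 = 4 edge channels
```

Each of the `N_embedding**2` entries plays the role of one edge channel c in the edge basis.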
The orientation-dependent angular basis is

L_l(r̂_ji) = x̂_ji^{l_x} ŷ_ji^{l_y} ẑ_ji^{l_z}, (2)

where l ≡ (l_x, l_y, l_z) are the angular momentum numbers satisfying l_x + l_y + l_z = l, with l being the total angular momentum. Such an angular basis is just the spherical harmonic functions expressed in a Cartesian basis. Indeed, one can easily transform a function of a given total angular momentum l between the Cartesian and spherical harmonic bases by a simple matrix multiplication [33]. As examples of operations in the Cartesian angular basis, one can add two vectors,

L_l(⃗r_1 + ⃗r_2) = Σ_{l_1+l_2=l} C^l_{l_1 l_2} L_{l_1}(⃗r_1) L_{l_2}(⃗r_2), (3)

where the prefactor is a product of binomial coefficients, C^l_{l_1 l_2} = C(l_x, l_1x) C(l_y, l_1y) C(l_z, l_1z), and one can contract the angular bases of two unit vectors into a power of their dot product,

Σ_l [l!/(l_x! l_y! l_z!)] L_l(r̂_1) L_l(r̂_2) = (r̂_1 · r̂_2)^l, (4)

where the sum is over all l with l_x + l_y + l_z = l. One can prove the relationships in Eqns. (3) and (4) using the multinomial formula. R_n,cl(r_ji) is a set of radial basis functions consisting of n functions, coupled with the total angular momentum l and the edge channel c. CACE first uses a set of ñ raw radial basis functions R_ñ(r_ji), which can be trainable Bessel functions with different frequencies [15] multiplied by a smooth cutoff function. The raw radial basis is mixed using a linear transformation to form R_n,cl for each c and l,

R_n,cl(r_ji) = Σ_ñ W_ñn,cl R_ñ(r_ji), (5)

where W_ñn,cl is an ñ × n matrix indexed by c and l. This means that, for each combination of l and c, there is a different set of n radial functions. In practice, the mixing is performed on the atom-centered basis for efficiency, as described below.
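The multinomial identity of Eqn. (4) can be verified numerically. The short script below is a standalone sketch, independent of the CACE code: it enumerates the Cartesian angular components for each total angular momentum l and checks that the multinomial-weighted product of the angular bases of two unit vectors equals the power of their dot product.

```python
import math

def angular_components(l):
    """All (lx, ly, lz) with lx + ly + lz = l."""
    return [(lx, ly, l - lx - ly)
            for lx in range(l + 1) for ly in range(l + 1 - lx)]

def L_comp(rhat, lx, ly, lz):
    """Cartesian angular basis x̂^lx ŷ^ly ẑ^lz for a unit vector."""
    x, y, z = rhat
    return x**lx * y**ly * z**lz

def multinom(l, lx, ly, lz):
    f = math.factorial
    return f(l) // (f(lx) * f(ly) * f(lz))

def normalize(r):
    n = math.sqrt(sum(c * c for c in r))
    return tuple(c / n for c in r)

# Check Eqn. (4): sum of multinomial * L(r1) * L(r2) over lx+ly+lz=l
# equals (r1 · r2)^l, i.e. cos^l of the enclosed angle.
r1 = normalize((0.3, -1.2, 0.5))
r2 = normalize((1.0, 0.4, -0.2))
for l in range(5):
    lhs = sum(multinom(l, *c) * L_comp(r1, *c) * L_comp(r2, *c)
              for c in angular_components(l))
    rhs = sum(a * b for a, b in zip(r1, r2)) ** l
    assert abs(lhs - rhs) < 1e-12
```

The identity is exact (it is just the multinomial theorem applied to the expansion of the dot product), so the check holds to floating-point precision for any pair of unit vectors.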
Atom-centered basis (A-basis) As shown in Fig. 1c, the atom-centered representation is made by summing the edge basis over all the edges of a node,

A_i,cnl = Σ_{j ∈ N(i)} χ_cnl(σ_i, σ_j).

This corresponds to a projection of the edge basis on the atomic density, which is commonly referred to as the "density trick" and was originally introduced to construct the SOAP and bispectrum descriptors [8].
Symmetrized basis (B-basis) The orientation-dependent A features are symmetrized to get the invariant B features of different body orders ν, as sketched in Fig. 1c. For l = 0, A is already rotationally invariant, so the ν = 1 features are simply B(1)_i,cn = A_i,cn0. For ν = 2, pairs of A features sharing the same total angular momentum l are contracted,

B(2)_i,cnl = Σ_{l_x+l_y+l_z=l} [l!/(l_x! l_y! l_z!)] (A_i,cnl)².

Doing the expansion explicitly and using Eqn. (4), one can show that [34]

B(2)_i,cnl = Σ_{j,k} R_n,cl(r_ji) R_n,cl(r_ki) (cos θ_ijk)^l,

so this term includes three-body contributions of ⃗r_ji and ⃗r_ki together with their enclosed angle θ_ijk. For ν = 3, B(3) includes four-body interaction terms of an analogous form, and the ν = 4 features B(4)_i,cnl_1l_2l_3 include five-body contributions. The general rule for constructing the symmetrized basis is illustrated in Fig. 1i. The key idea is that pairs of angular features L need to have a shared factor of l when summed over to form invariants. To eliminate linearly dependent terms, combinations of the angular features that differ from others only by permutations are not considered. Moreover, the shared factor l between any pair of L needs to be greater than zero, because if any l = 0 the resulting invariant can be constructed from products of lower-order invariants and can thus be eliminated as redundant. In other words, a valid combination illustrated in Fig. 1i needs to correspond to one connected graph, rather than two disconnected graphs of lower body orders; terms that factorize into products of disconnected lower-order invariants are eliminated.
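The rotational invariance of the ν = 2 features can be checked numerically. The sketch below builds toy A features for a single channel with a placeholder exponential radial function (not the trained Bessel basis), and verifies that B(2) is unchanged when all neighbor vectors are rotated together.

```python
import math

def angular_components(l):
    """All (lx, ly, lz) with lx + ly + lz = l."""
    return [(lx, ly, l - lx - ly)
            for lx in range(l + 1) for ly in range(l + 1 - lx)]

def multinom(l, lx, ly, lz):
    f = math.factorial
    return f(l) // (f(lx) * f(ly) * f(lz))

def A_features(neighbors, l, radial=lambda r: math.exp(-r)):
    """Atom-centered A basis: sum over edges of R(r) * x̂^lx ŷ^ly ẑ^lz.
    The exponential radial function is a stand-in for the mixed basis."""
    feats = {}
    for c in angular_components(l):
        total = 0.0
        for v in neighbors:
            r = math.sqrt(sum(x * x for x in v))
            u = [x / r for x in v]
            total += radial(r) * u[0]**c[0] * u[1]**c[1] * u[2]**c[2]
        feats[c] = total
    return feats

def B2(neighbors, l):
    """nu = 2 invariant: multinomial-weighted sum of squared A features."""
    A = A_features(neighbors, l)
    return sum(multinom(l, *c) * A[c]**2 for c in angular_components(l))

def rot_z(v, t):
    """Rotate a vector about the z axis by angle t."""
    x, y, z = v
    return (x*math.cos(t) - y*math.sin(t), x*math.sin(t) + y*math.cos(t), z)

neigh = [(1.0, 0.2, -0.3), (-0.5, 0.9, 0.1), (0.2, -0.4, 1.1)]
for l in range(4):
    b = B2(neigh, l)
    b_rot = B2([rot_z(v, 0.7) for v in neigh], l)
    assert abs(b - b_rot) < 1e-10
```

Expanding the square reproduces the double sum over neighbor pairs with the (cos θ_ijk)^l factor, which is why the result depends only on distances and enclosed angles.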
The CACE framework thus produces polynomially independent invariants. Other radial or edge terms that do not depend on l, or depend only on the total angular momentum l, can be multiplied on top without breaking the rotational symmetry. Based on the expansions, one can see that these CACE B(ν) terms are similar to the B(ν) terms in the original ACE paper [4] or the invariant basis functions in MTP [7], with two differences. First, many linearly or polynomially dependent features are eliminated during the symmetrization stage in CACE, resulting in a much more compact basis set than the original ACE [4] or MTPs [7], where the basis functions span the space of all invariants including linearly dependent terms [27]. Second, since each radial channel R_n,cl contains all raw radial channels with trainable weights (Eqn. (5)), it is not necessary to couple different radial channels together when forming the basis B. This means that the number of radial channels of B stays at n, rather than growing as n^ν as the body order increases.
The B basis is a complete basis of permutation- and rotation-invariant functions of the atomic environment [4]. In practice, one truncates the series at maximum values l_max and ν_max to make the computation tractable. The dimension of the B basis is c × n × N_L, where the factors are the numbers of edge channels, radial basis functions, and angular features, respectively. Fig. 2 shows the total number of angular features N_L up to given l_max and ν_max. N_L stays compact even for ν_max = 5. For most practical applications, l_max between 2 and 4 and ν_max between 2 and 4 are sufficient for good accuracy when fitting CACE potentials.
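While N_L counts the symmetrized B features, the size of the underlying Cartesian angular basis per edge is also easy to count: for each total angular momentum l there are (l+1)(l+2)/2 components (l_x, l_y, l_z). A minimal check:

```python
def n_components(l):
    """Number of (lx, ly, lz) tuples with lx + ly + lz = l."""
    return (l + 1) * (l + 2) // 2

per_edge = [n_components(l) for l in range(4)]
# for l = 0..3 this gives [1, 3, 6, 10], i.e. 20 Cartesian angular
# components per edge when l_max = 3
```

This per-edge count grows only quadratically in l, consistent with the compactness of the angular basis noted in the text.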
Notice that in the CACE formulation, each invariant B feature can be evaluated independently, and there is no need to precompute or store the high-dimensional many-body product basis of ACE [4] or the moment tensors of MTPs [7], even in sparse form. The efficiency can be understood as effectively exploiting the extreme sparsity of the Clebsch-Gordan coupling matrix when forming invariants.
Message passing The framework above reproduces the original ACE, although formulated entirely in Cartesian space. In ACE, the atomic energy of each atom is determined by the neighboring atoms within its cutoff radius r_cut. On top of that, one can incorporate message passing, which iteratively propagates information along the edges, thereby communicating local information and effectively enlarging the receptive field of each atom. A systematic discussion of the design choices for message passing can be found in Refs. [31,35].
CACE uses two types of message passing (MP) mechanisms. The first type, denoted by the green arrow in Fig. 1e, communicates the A information between neighboring atoms: the orientation-dependent message from atom j to i can be expressed as

m_ji,cnl = F_cnl(r_ji) A_j,cnl,

where F_cnl(r_ji) is a filter function of the scalar distance r_ji, which can depend on the edge channel, the radial channel, and the total angular momentum. In practice, we use an exponential decay function with a trainable exponent, to capture the physics that farther atoms have less effect, although other choices are possible.
The second MP mechanism, denoted by the two blue arrows in Fig. 1e, uses a recursive edge embedding scheme analogous to recursively embedded atom neural networks (REANN) [34] and multilayer ACE (ml-ACE) [36]: the message takes the form

m_ji,cnl = H_cl(h_j) R_n,cl(r_ji) L_l(r̂_ji),

where H_cl is a trainable scalar function that can optionally depend on the edge channel and the total angular momentum; we simply use a linear neural network (NN) layer for H. The aggregated message is

M_i,cnl = Σ_{j ∈ N(i)} m_ji,cnl.

One can then update the A representation of each node,

A^(t+1)_i,cnl = G(A^(t)_i,cnl, M_i,cnl),

and for G we use a simple linear function. After obtaining the updated A^(t+1), we again take the symmetrized B basis as described previously. It is easy to verify that these symmetrized representations depend only on the scalar values of the atomic distances and angles and are thus rotationally invariant. Note that one can make various design choices for the F, H and G functions: they can depend on c, n, and l without breaking the invariance at the symmetrization stage.
In practice, only one MP layer, or none, is typically used in CACE. This is because the use of higher-body-order atomic features and messages reduces the need for many MP layers, as in the case of MACE [18].
Readout and output After T MP layers, all the resulting B features at each layer are concatenated. These invariant features are compact; their number is T × c × n × N_L. A readout function then maps these invariant features to the atomic energy of each atom i. In practice, for the readout function we use the sum of a linear NN and a multilayer NN, as the former preserves the body-ordered contributions and the latter captures the higher-order terms from the truncated series. Finally, the total potential energy of the system is the sum of all atomic energies, and the forces can readily be computed by taking the derivatives of the total energy with respect to the atomic coordinates.
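The readout described above (linear part plus a small nonlinear network) can be sketched as follows; the feature vector, layer sizes, and weights are random placeholders, not trained values.

```python
import math
import random

random.seed(1)

def linear(x, W, b):
    """Dense layer: W x + b, with W given as a list of rows."""
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def readout(B, W1, b1, W2, b2, w_lin, b_lin):
    """Atomic energy as the sum of a linear readout (preserving the
    body-ordered contributions) and a one-hidden-layer MLP (capturing
    higher-order terms from the truncated series)."""
    e_lin = sum(w * x for w, x in zip(w_lin, B)) + b_lin
    h = [math.tanh(v) for v in linear(B, W1, b1)]
    e_mlp = linear(h, W2, b2)[0]
    return e_lin + e_mlp

n_feat, n_hidden = 8, 4
B = [random.gauss(0, 1) for _ in range(n_feat)]          # toy B features
W1 = [[random.gauss(0, 0.1) for _ in range(n_feat)] for _ in range(n_hidden)]
b1 = [0.0] * n_hidden
W2 = [[random.gauss(0, 0.1) for _ in range(n_hidden)]]
b2 = [0.0]
w_lin = [random.gauss(0, 0.1) for _ in range(n_feat)]

E_i = readout(B, W1, b1, W2, b2, w_lin, 0.0)             # atomic energy
```

Setting the MLP weights to zero recovers the purely linear, body-ordered readout, which is the design rationale stated in the text.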

B. Bulk water
As an example application to complex fluids, we demonstrate CACE on a dataset of 1,593 liquid water configurations [37]. Details of the training are in the Methods section. The root mean square errors (RMSE) of the energy per water molecule and of the forces on the validation set for the CACE model and other MLIPs are presented in Table I. MLIPs based on two-body and three-body descriptors without message passing, including BPNN based on ACSF [5], the embedded atom neural network (EANN) [39], and DeepMD [38], have similar errors, which are amongst the highest in this comparison. Linear ACE uses higher-body-order features, and can be directly compared to the CACE model trained with the same cutoff and without message passing (r_cut = 5.5 Å, T = 0); the latter has lower error. The REANN potential [34] with three message passing layers, the equivariant message passing model NequIP [17], the high-body-order MACE [18] model, and the CACE model with T = 1 have the lowest errors.
For running molecular dynamics (MD) simulations with MLIPs, a crucial requirement besides accuracy is stability: the ability to run long trajectories with the system remaining in physically reasonable states and the simulation not exploding, even at conditions with few or no training data. To assess the simulation stability, we performed independent simulations of 300 ps in length for water at 1 g mL−1 at 300 K, 500 K, 1000 K, 1500 K, and 2000 K using the CACE T = 1 model. The simulation cell contains 512 water molecules. The time step was 1 fs and the Nosé-Hoover thermostat was used. The upper panel of Fig. 3 shows the oxygen-oxygen (O-O) radial distribution function (RDF). At 300 K, the computed O-O RDF is in excellent agreement with experimental X-ray diffraction measurements [43]. Note that nuclear quantum effects have a slight destructuring effect on the O-O RDF, but the effect is quite small [37]. The mean squared displacements (MSD) from these simulations are shown in the lower panel of Fig. 3. The diffusivity D at 300 K, obtained by fitting a linear function to the MSD, agrees well with the DFT MD result (D_DFT = 2.67 ± 0.10 Å² ps−1) from Ref. [44]. At high temperatures up to 2000 K, the MD simulations remained stable, demonstrating the stability of the CACE potential. Note that we do not expect the results at very high temperatures to be physically predictive; this example only illustrates the stability of CACE. Such stability is not only necessary for converging statistical properties, but also allows refinement of the potential: to iteratively improve its accuracy under certain conditions, one can perform active learning, collecting snapshots from the corresponding MD trajectories and using them for re-training.
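The diffusivity extraction follows the Einstein relation in three dimensions, MSD(t) ≈ 6Dt. A minimal sketch with synthetic data; the D value below is the DFT reference quoted above, used here only to generate the toy MSD curve:

```python
def fit_slope(ts, ys):
    """Ordinary least-squares slope of y against t (with intercept)."""
    n = len(ts)
    tm = sum(ts) / n
    ym = sum(ys) / n
    num = sum((t - tm) * (y - ym) for t, y in zip(ts, ys))
    den = sum((t - tm) ** 2 for t in ts)
    return num / den

def diffusivity_from_msd(ts, msd):
    """Einstein relation in 3D: MSD(t) ≈ 6 D t, so D = slope / 6."""
    return fit_slope(ts, msd) / 6.0

D_true = 2.67                            # Å^2/ps, DFT reference value
ts = [0.1 * i for i in range(1, 101)]    # time in ps
msd = [6.0 * D_true * t for t in ts]     # synthetic, perfectly linear MSD
D_fit = diffusivity_from_msd(ts, msd)
```

On real trajectory data one would discard the short-time ballistic regime before fitting; for the exactly linear synthetic data the fit recovers D to floating-point precision.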

FIG. 3. Simulation results of water using the CACE T = 1 model. a Oxygen-oxygen radial distribution functions (RDF) at different temperatures and 1 g/mL, computed via classical MD simulations in the NVT ensemble. The experimental O-O RDF at ambient conditions was obtained from Ref. [43]. b Mean squared displacement (MSD) from the liquid water simulations. Diffusivities (D) are shown in the legends.

C. Small molecules: ethanol and 3BPA
To demonstrate CACE on small organic molecules, we fitted to 1,000 ethanol structures from the MD17 database [45], which is often used for benchmarking. Table II compares different MLIPs trained on energies and forces for the same MD17-ethanol dataset. The energy accuracy of CACE is similar to that of the NequIP model with T = 5, although the CACE force error is larger. The stability (S) in Table II measures how long MD simulations at 500 K using the MLIP remain physically sensible (no pathological behavior or entering physically prohibited states) in runs with a maximum length of 300 ps, so 300 ps is the maximum score [48]. The stability values for other MLIPs are from Ref. [48], which used 10,000 training structures and a time step of 0.5 fs in MD. For CACE, we used 1,000 training structures and a time step of 1 fs, which is more stringent, but the MD nevertheless remained stable. As observed in Refs. [48,49], force accuracy does not imply stability, and the latter is a key metric for MLIPs.

TABLE I. Root mean square errors (RMSE) for energy (E) per water molecule (in meV/H2O) and force (in meV Å−1) on a liquid water dataset from Ref. [37]. The cutoffs r_cut of the atomic environment and the numbers of message passing layers T are listed. The NequIP and DeepMD models are trained by us, and the others are from the references. Models compared include BPNN [37], DeepMD [38], EANN [39], linear ACE [40], REANN [34], and NequIP [17].
Table III shows the errors of different potentials when extrapolating to higher temperatures. NequIP and MACE perform the best, while CACE falls between them and ACE. We then compared the dihedral potential energy surface of the molecule. Fig. 4 shows the complex energy landscape of the α-γ plane predicted by DFT and by CACE, for the case of β = 120°, the plane with the fewest training points. Nevertheless, the CACE landscape closely resembles the ground truth and is regular, more so than the other potentials benchmarked in Ref. [52], even though those were trained on a more diverse dataset including high-temperature configurations.

D. 25-element high-entropy alloys
High-entropy alloys, which are composed of several metallic elements, have unique mechanical, thermodynamic, and catalytic properties [53]. They are challenging to model due to the large number of elements involved, with which many MLIPs scale poorly. Ref. [54] introduced a dataset of distorted crystalline structures containing 25 d-block transition metals, and further performed alchemical learning (AL) that explicitly compresses chemical space, leading to a model that is both interpretable and very stable.
In Fig. 5a, the CACE learning curve is compared to the ones from Ref. [54], including a 3B model that has an atomic energy baseline and three-body terms, and a 3B+NN model that further incorporates a full set of pair potentials and a non-linear term built on top of contracted power spectrum features. Overall, the CACE model achieves lower error and learns significantly more efficiently from fewer data, and it has only 91,232 trainable parameters, compared to more than 160,000 in the 3B+NN model [54].
The errors of the models are presented in Table IV. A very recent architecture, the Point Edge Transformer (PET), which utilizes local coordinate systems, achieved state-of-the-art performance on several benchmark systems and also has impressively low test error on the HEA25 dataset [55]. However, as we discuss below, the PET potential has poor transferability.
Alchemical learning capability not only makes learning more efficient, but is also essential for achieving a foundation model that is valid across the periodic table. CACE is able to perform alchemical learning. The learnable embedding θ for each element type encodes its chemical information, and can be visualized to gain insight into the data-driven similarities. As N_embedding = 4 in this case, we performed a principal component analysis (PCA) and plotted the first two principal component axes in Fig. 5b. The elements are arranged in a way that is strongly reminiscent of their placement in the d-block. This shows that the element embedding scheme is able to learn the structure of the periodic table in a data-driven way.
A stringent way to demonstrate alchemical learning is to extrapolate to unseen elements. In this case, Re and Os are absent from the training set. Ref. [54] provides a hold-out set containing 61 pairs of structures, each of 36 or 48 atoms. Each pair has one fcc or bcc structure with only the original 25 elements, and another structure with up to 11 atoms randomly substituted with Re and Os. We simply took the CACE model trained on the 25-element dataset and obtained the chemical embeddings of Re and Os by linear interpolation between W and Ir, as illustrated by the empty circles in Fig. 5b. We also added atomic-energy baselines for Re and Os, obtained by fitting the residual energy to a two-parameter linear model that depends only on the Re and Os content. In Fig. 5d we show the parity plot for the substitution energy, defined as the potential energy difference between the original and the substituted structure per substituted atom. Impressively, the CACE error for the substituted structures with unseen elements is rather similar to the test error on the 25-element training set, and is lower than that of the AL method in Ref. [54]. The force comparison, shown in the inset of Fig. 5d, shows the same trend. With the PET potential, it is not clear whether alchemical learning can be performed in a straightforward way.
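The embedding interpolation for the unseen elements can be sketched as below. The embedding values are hypothetical placeholders (not the trained ones), and the interpolation weights of 1/3 and 2/3 are our reading of "linear interpolation between W and Ir", since Re and Os sit between W and Ir in the d-block.

```python
def lerp(a, b, t):
    """Componentwise linear interpolation between two embedding vectors."""
    return [ai + t * (bi - ai) for ai, bi in zip(a, b)]

# hypothetical 4-dimensional embeddings for W and Ir (placeholders)
theta_W = [0.8, -0.2, 0.1, 0.5]
theta_Ir = [0.2, 0.4, -0.3, 0.9]

theta_Re = lerp(theta_W, theta_Ir, 1.0 / 3.0)   # one step from W toward Ir
theta_Os = lerp(theta_W, theta_Ir, 2.0 / 3.0)   # two steps from W toward Ir
```

The interpolated embeddings then enter the model exactly like trained ones, which is what allows predictions for elements never seen in training.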
The most impressive validation of the AL model in Ref. [54] is the extrapolation to configurations very far from the training set: at 5000 K, the systems generated from constant-pressure MD simulations with Monte Carlo element swaps have melted, but the AL potential, trained exclusively on solid structures, is still able to describe the energetics of the structures. This is a test on which the PET model [55] did not perform well. To verify the generalizability of CACE across a wide temperature range, we recomputed the energies and forces of the structures from the 5000 K trajectory, as well as from another trajectory at a low temperature of 300 K, and show the results in Fig. 5c and Table IV. The CACE errors are somewhat smaller than those reported in Ref. [54], and quite modest compared to the training error.

TABLE II. Mean absolute error (MAE) of energy (E) and force (F) for ethanol in the MD17 dataset, in units of meV and meV Å−1, respectively, trained on 1,000 reference configurations. The stability (S), in units of ps, shows how long the simulation using the MLIP remains physically realistic in MD of a maximum length of 300 ps at 500 K. Besides CACE, the errors are obtained from the respective references, and the stability is from Ref. [48]. Models compared include SchNet [9], DimeNet [46], sGDML [47], PaiNN [15], SphereNet [11], NewtonNet [13], GemNet-T [12], NequIP [17], ACE [4], and GAP [6].

II. DISCUSSION
There have been other MLIP approaches that operate only in Cartesian space, including NewtonNet [13], EGNN [14], and PaiNN [15], which use atomic displacement vectors in message passing; PET, based on local coordinate systems [55]; and TensorNet, which uses Cartesian tensors to represent pairwise displacements [56]. CACE differs from these approaches in that it can be directly mapped to ACE, so like ACE, it is complete and body-ordered. The completeness helps the accuracy and the smoothness of the potential, particularly when there exist atomic environments that are degenerate under three-body descriptors or the bispectrum [57,58]. The body ordering makes the potential more physically interpretable and may help extrapolation. The angular basis of CACE (Eqn. (2)) is the same as in MTP [7], but CACE avoids computing moment tensors, and forms a more compact symmetric basis set via a simple combination rule that eliminates polynomially dependent features (Fig. 1i). As added advantages, the CACE descriptors have optimized radial channel coupling and are capable of alchemical learning.
A topic that tends to be overlooked in the development of MLIP methods is stability and generalizability. In practice, an MLIP that is not stable in MD simulations is not useful. Nonetheless, stability and generalization are rarely considered in benchmarks, perhaps because they are less straightforward to assess than energy and force errors. On the other hand, it has been shown that the accuracy of a potential does not imply stability [48,49]. The CACE potential shows high stability and extrapolatability on the datasets we benchmarked: stable MD simulations of water up to 2000 K, MD of ethanol, extrapolation to high temperature and the dihedral scan of 3BPA, as well as generalization to the melt and to unseen elements in high-entropy alloys. For water, the accuracy of CACE is on par with the most accurate potentials to date, NequIP [17] and MACE [18]. For small molecules, CACE is less accurate than NequIP or MACE but more accurate than the other recent MLIPs we compared to. For all practical purposes, the accuracy of CACE for the small molecules is probably sufficient, considering that the error is very small in absolute terms and is much lower than chemical accuracy. Moreover, CACE achieved such accuracy with only one message passing layer and relatively few training parameters, and its accuracy can probably be improved with more message passing layers, higher N_embedding, ν_max, l_max, or a larger MLP at the readout. We implemented the CACE potential using PyTorch, and the code is available at https://github.com/BingqingCheng/cace. With the current implementation, training on the water and the HEA25 datasets took about two days each on a vintage GeForce GTX 1080 Ti GPU card, and the small-molecule models can be trained on a laptop within one or two days. The training should be much faster on state-of-the-art GPUs. For the speed of MD, Table V compares the timing on one Nvidia A100 GPU for three water models from Table I. Note that such a comparison depends very much on the specific models used (higher ν, l_max, r_cut, and T all add to the cost) as well as on the implementation. CACE is marginally faster in this comparison and can drive simulations of 24,000 atoms. The ability to run such large system sizes on a single GPU speaks to the low memory consumption of CACE. The implementation can be further optimized: the computation of the node features and the symmetrization step may be compiled and made faster. The training and the prediction currently run on one GPU and could be made more parallel. MD simulations can be performed via the ASE package, but a LAMMPS interface is currently absent. As CACE has a moderate receptive field, it should in principle be possible to make it scale well in MD simulations.
Many engineering aspects of the CACE potential can be further fine-tuned. For example, dataset normalization and internal normalization can have a large effect on training efficiency and outcome [31]. The radial basis influences the learning efficiency [59], and its best choice is still an open question. Other choices of the F, H and G functions in the message passing may be used. One can also use the product of B^(t)_i and χ_cnl in the message function, exploiting the relationship in Eqn. (3). The training protocol can also influence the outcome.
Other possible future improvements of CACE include pretraining on large datasets before fine-tuning the model.It should be possible to develop a foundation model that is generally valid in the periodic table [60].The alchemical learning capacity of CACE may facilitate this.
In summary, we propose a scheme to preserve the rotational symmetry of atomic environments at different body orders, operating entirely in Cartesian coordinates. The transformation to the spherical harmonic basis, the tensor product of the atomic basis, and the use of the high-dimensional Clebsch-Gordan coupling matrix are all avoided. The resulting symmetric features are low-dimensional and polynomially independent. As the symmetrization in Cartesian coordinates is equivalent to the Clebsch-Gordan contraction, the scheme can be applied to other MLIPs that use the ACE framework (e.g. linear ACE [40], SNAP, SOAP-GAP [6], the Jacobi-Legendre cluster expansion [61], ml-ACE [36], graph ACE [35]), or to E(3)-equivariant message passing neural networks (e.g. NequIP [17], MACE [18], BOTNet [31], SEGNN [62], Allegro [51]).

Bulk water
The water dataset has 1,593 liquid water configurations, each of 64 molecules [37]. It was computed at the revPBE0-D3 level of density functional theory (DFT). 90% of the data are used for training. We used a cutoff r_cut = 5.5 Å, 6 Bessel radial functions, c = 12, l_max = 3, ν_max = 3, N_embedding = 3, and either no message passing (T = 0) or one message passing layer (T = 1). The fitted CACE models have relatively few parameters: 24,572 trainable parameters for the T = 0 water model, and 69,320 for the T = 1 model.
In Table I, the NequIP and DeepMD models are trained by us, and the others are from the references. In addition, we also trained a MACE model, with E RMSE = 1.9 meV/H2O and F RMSE = 46.8 meV Å−1. As the original Behler-Parrinello neural network (BPNN) of Ref. [37] was trained using the RuNNer code [41], we also trained another one using N2P2 [42] and obtained E RMSE = 5.8 meV/H2O and F RMSE = 108 meV Å−1.
Small molecules The ethanol data from the MD17 database [45] were sampled from DFT molecular dynamics simulations at 500 K. We used a cutoff r_cut = 4 Å, 6 Bessel radial functions, c = 12, l_max = 4, ν_max = 3, N_embedding = 4, and one message passing layer (T = 1). The model was trained on 1,000 structures randomly selected from the dataset (90%/10% train/validation split), and tested on another 10,000 randomly chosen structures.
25-element high-entropy alloys The HEA dataset from Ref. [54] consists of 25,630 distorted crystalline structures containing 36 or 48 atoms on bcc or fcc lattices. We took a test set containing 1,000 configurations, and used up to the remaining 24,630 data points for training and validation. We used a CACE model with r_cut = 4.5 Å, l_max = 3, ν_max = 3, N_embedding = 3, and T = 1.

FIG. 1. Schematic of the CACE potential. a-h show each step of the operation, and i illustrates the rule for making rotationally invariant features.

FIG. 2. The total number of angular features N_L, for maximum values of l_max and ν_max.

FIG. 4. The dihedral scan benchmark of the 3BPA molecule. a Molecular structure and the three dihedral angles. b The dihedral potential energy landscape for β = 120°, as predicted by DFT and CACE. The white dots on the DFT potential energy surface correspond to all training configurations with β between 100° and 140°.

TABLE III. Root mean square errors (RMSE) for energy (E) and force (F) on the 3BPA temperature-transferability dataset, reported in units of meV and meV Å−1. All models were trained on T = 300 K data. Results of previous models, except for NequIP and MACE, are from Ref. [52]. ANI-2x was trained on 8.9 million structures, and ANI-pretrained was pretrained and fine-tuned on this dataset.

TABLE IV. Mean absolute error (MAE) of energy (E) and force (F) for the HEA25 dataset, reported in units of meV and meV Å−1. There are three sets of benchmarks: test error, alchemical extrapolation to unseen Re and Os substitute atoms, and temperature transferability to 5000 K.